Apr 24 00:26:14.717850 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 24 00:26:14.718301 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:26:14.718311 kernel: BIOS-provided physical RAM map:
Apr 24 00:26:14.718317 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 24 00:26:14.718322 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 24 00:26:14.718326 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 24 00:26:14.718334 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 24 00:26:14.718339 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Apr 24 00:26:14.718457 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 24 00:26:14.718463 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 24 00:26:14.718468 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 24 00:26:14.718472 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 24 00:26:14.718477 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 24 00:26:14.718481 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 24 00:26:14.718490 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 24 00:26:14.718495 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 24 00:26:14.719038 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 00:26:14.719044 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:26:14.719049 kernel: NX (Execute Disable) protection: active
Apr 24 00:26:14.719054 kernel: APIC: Static calls initialized
Apr 24 00:26:14.719062 kernel: e820: update [mem 0x9a143018-0x9a14cc57] usable ==> usable
Apr 24 00:26:14.719067 kernel: e820: update [mem 0x9a106018-0x9a142e57] usable ==> usable
Apr 24 00:26:14.719072 kernel: extended physical RAM map:
Apr 24 00:26:14.719077 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 24 00:26:14.719082 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 24 00:26:14.719087 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 24 00:26:14.719091 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 24 00:26:14.719096 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a106017] usable
Apr 24 00:26:14.719101 kernel: reserve setup_data: [mem 0x000000009a106018-0x000000009a142e57] usable
Apr 24 00:26:14.719106 kernel: reserve setup_data: [mem 0x000000009a142e58-0x000000009a143017] usable
Apr 24 00:26:14.719111 kernel: reserve setup_data: [mem 0x000000009a143018-0x000000009a14cc57] usable
Apr 24 00:26:14.719117 kernel: reserve setup_data: [mem 0x000000009a14cc58-0x000000009b8ecfff] usable
Apr 24 00:26:14.719122 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 24 00:26:14.719127 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 24 00:26:14.719132 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 24 00:26:14.719136 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 24 00:26:14.719141 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 24 00:26:14.719146 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 24 00:26:14.719151 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 24 00:26:14.719156 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 24 00:26:14.719165 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 00:26:14.719170 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:26:14.719175 kernel: efi: EFI v2.7 by EDK II
Apr 24 00:26:14.719180 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1b4018 RNG=0x9bb73018
Apr 24 00:26:14.719185 kernel: random: crng init done
Apr 24 00:26:14.719190 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Apr 24 00:26:14.719197 kernel: secureboot: Secure boot enabled
Apr 24 00:26:14.719202 kernel: SMBIOS 2.8 present.
Apr 24 00:26:14.719207 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 24 00:26:14.719212 kernel: DMI: Memory slots populated: 1/1
Apr 24 00:26:14.719217 kernel: Hypervisor detected: KVM
Apr 24 00:26:14.719222 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 24 00:26:14.719227 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 00:26:14.719233 kernel: kvm-clock: using sched offset of 22129295012 cycles
Apr 24 00:26:14.719239 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 00:26:14.719244 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 00:26:14.719250 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 00:26:14.719257 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 00:26:14.719379 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 24 00:26:14.719386 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 00:26:14.719391 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 00:26:14.719397 kernel: Using GB pages for direct mapping
Apr 24 00:26:14.719512 kernel: ACPI: Early table checksum verification disabled
Apr 24 00:26:14.719850 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Apr 24 00:26:14.719858 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 24 00:26:14.719863 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:26:14.719872 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:26:14.719878 kernel: ACPI: FACS 0x000000009BBDD000 000040
Apr 24 00:26:14.719883 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:26:14.719888 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:26:14.719893 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:26:14.719899 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:26:14.719905 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 00:26:14.719910 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Apr 24 00:26:14.719915 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Apr 24 00:26:14.719923 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Apr 24 00:26:14.719928 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Apr 24 00:26:14.719934 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Apr 24 00:26:14.719939 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Apr 24 00:26:14.719944 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Apr 24 00:26:14.719949 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Apr 24 00:26:14.719955 kernel: No NUMA configuration found
Apr 24 00:26:14.719960 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Apr 24 00:26:14.719966 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Apr 24 00:26:14.719972 kernel: Zone ranges:
Apr 24 00:26:14.719978 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 00:26:14.719983 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Apr 24 00:26:14.719989 kernel: Normal empty
Apr 24 00:26:14.719994 kernel: Device empty
Apr 24 00:26:14.719999 kernel: Movable zone start for each node
Apr 24 00:26:14.720005 kernel: Early memory node ranges
Apr 24 00:26:14.720010 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Apr 24 00:26:14.720015 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Apr 24 00:26:14.720021 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Apr 24 00:26:14.720027 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Apr 24 00:26:14.720032 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Apr 24 00:26:14.720037 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Apr 24 00:26:14.720043 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 00:26:14.720048 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Apr 24 00:26:14.720054 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 00:26:14.720059 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 24 00:26:14.720064 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 24 00:26:14.720070 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Apr 24 00:26:14.720077 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 00:26:14.720082 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 00:26:14.720088 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 00:26:14.720093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 00:26:14.720098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 00:26:14.720104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 00:26:14.720223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 00:26:14.720229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 00:26:14.720234 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 00:26:14.720242 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 00:26:14.720248 kernel: TSC deadline timer available
Apr 24 00:26:14.720253 kernel: CPU topo: Max. logical packages: 1
Apr 24 00:26:14.720258 kernel: CPU topo: Max. logical dies: 1
Apr 24 00:26:14.720263 kernel: CPU topo: Max. dies per package: 1
Apr 24 00:26:14.720269 kernel: CPU topo: Max. threads per core: 1
Apr 24 00:26:14.720279 kernel: CPU topo: Num. cores per package: 4
Apr 24 00:26:14.720286 kernel: CPU topo: Num. threads per package: 4
Apr 24 00:26:14.720292 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 24 00:26:14.720298 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 00:26:14.720415 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 00:26:14.720421 kernel: kvm-guest: setup PV sched yield
Apr 24 00:26:14.720430 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 24 00:26:14.720436 kernel: Booting paravirtualized kernel on KVM
Apr 24 00:26:14.720442 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 00:26:14.720447 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 00:26:14.720453 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 24 00:26:14.720461 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 24 00:26:14.720466 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 00:26:14.720472 kernel: kvm-guest: PV spinlocks enabled
Apr 24 00:26:14.720478 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 00:26:14.720484 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:26:14.720490 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 00:26:14.720496 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 00:26:14.720502 kernel: Fallback order for Node 0: 0
Apr 24 00:26:14.720509 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Apr 24 00:26:14.720515 kernel: Policy zone: DMA32
Apr 24 00:26:14.720854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 00:26:14.720861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 00:26:14.720867 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 24 00:26:14.720873 kernel: ftrace: allocated 157 pages with 5 groups
Apr 24 00:26:14.720878 kernel: Dynamic Preempt: voluntary
Apr 24 00:26:14.720884 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 00:26:14.721000 kernel: rcu: RCU event tracing is enabled.
Apr 24 00:26:14.721011 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 00:26:14.721017 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 00:26:14.721023 kernel: Rude variant of Tasks RCU enabled.
Apr 24 00:26:14.721029 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 00:26:14.721035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 00:26:14.721041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 00:26:14.721047 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 00:26:14.721052 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 00:26:14.721058 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 00:26:14.721176 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 00:26:14.721183 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 00:26:14.721189 kernel: Console: colour dummy device 80x25
Apr 24 00:26:14.721195 kernel: printk: legacy console [ttyS0] enabled
Apr 24 00:26:14.721201 kernel: ACPI: Core revision 20240827
Apr 24 00:26:14.721207 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 00:26:14.721212 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 00:26:14.721218 kernel: x2apic enabled
Apr 24 00:26:14.721224 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 00:26:14.721232 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 00:26:14.721238 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 00:26:14.721244 kernel: kvm-guest: setup PV IPIs
Apr 24 00:26:14.721250 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 00:26:14.721255 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 00:26:14.721261 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 00:26:14.721267 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 00:26:14.721273 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 00:26:14.721279 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 00:26:14.721287 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 00:26:14.721292 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 00:26:14.721298 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 00:26:14.721415 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 00:26:14.721422 kernel: RETBleed: Vulnerable
Apr 24 00:26:14.721428 kernel: Speculative Store Bypass: Vulnerable
Apr 24 00:26:14.721434 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 00:26:14.721442 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 00:26:14.721454 kernel: active return thunk: its_return_thunk
Apr 24 00:26:14.721463 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 00:26:14.721472 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 00:26:14.721481 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 00:26:14.721490 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 00:26:14.721500 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 00:26:14.721509 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 00:26:14.721518 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 00:26:14.722009 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 00:26:14.722019 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 00:26:14.722025 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 00:26:14.722031 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 00:26:14.722037 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 00:26:14.722043 kernel: Freeing SMP alternatives memory: 32K
Apr 24 00:26:14.722049 kernel: pid_max: default: 32768 minimum: 301
Apr 24 00:26:14.722055 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 24 00:26:14.722061 kernel: landlock: Up and running.
Apr 24 00:26:14.722067 kernel: SELinux: Initializing.
Apr 24 00:26:14.722074 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:26:14.722080 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:26:14.722086 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 00:26:14.722207 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 00:26:14.722214 kernel: signal: max sigframe size: 3632
Apr 24 00:26:14.722219 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 00:26:14.722226 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 00:26:14.722231 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 24 00:26:14.722237 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 00:26:14.722246 kernel: smp: Bringing up secondary CPUs ...
Apr 24 00:26:14.722252 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 00:26:14.722257 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 00:26:14.722263 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 00:26:14.722269 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 00:26:14.722276 kernel: Memory: 2357260K/2552216K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 189064K reserved, 0K cma-reserved)
Apr 24 00:26:14.722281 kernel: devtmpfs: initialized
Apr 24 00:26:14.722287 kernel: x86/mm: Memory block size: 128MB
Apr 24 00:26:14.722293 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Apr 24 00:26:14.722300 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Apr 24 00:26:14.722306 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 00:26:14.722312 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 00:26:14.722318 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 00:26:14.722324 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 00:26:14.722330 kernel: audit: initializing netlink subsys (disabled)
Apr 24 00:26:14.722335 kernel: audit: type=2000 audit(1776990347.135:1): state=initialized audit_enabled=0 res=1
Apr 24 00:26:14.722341 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 00:26:14.722347 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 00:26:14.722354 kernel: cpuidle: using governor menu
Apr 24 00:26:14.722360 kernel: efi: Freeing EFI boot services memory: 42796K
Apr 24 00:26:14.722481 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 00:26:14.722487 kernel: dca service started, version 1.12.1
Apr 24 00:26:14.722493 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 24 00:26:14.722499 kernel: PCI: Using configuration type 1 for base access
Apr 24 00:26:14.722505 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 00:26:14.722511 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 00:26:14.722516 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 00:26:14.722786 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 00:26:14.722792 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 00:26:14.722798 kernel: ACPI: Added _OSI(Module Device)
Apr 24 00:26:14.722804 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 00:26:14.722810 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 00:26:14.722816 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 00:26:14.722821 kernel: ACPI: Interpreter enabled
Apr 24 00:26:14.722827 kernel: ACPI: PM: (supports S0 S5)
Apr 24 00:26:14.722833 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 00:26:14.722841 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 00:26:14.722847 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 00:26:14.722853 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 00:26:14.722859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 00:26:14.808511 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 00:26:14.812476 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 00:26:14.813057 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 00:26:14.813207 kernel: PCI host bridge to bus 0000:00
Apr 24 00:26:14.814431 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 00:26:14.814489 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 00:26:14.815042 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 00:26:14.815096 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 24 00:26:14.815147 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 24 00:26:14.815198 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 24 00:26:14.815252 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 00:26:14.817012 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 24 00:26:14.817344 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 24 00:26:14.817409 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 24 00:26:14.817466 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 24 00:26:14.818011 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 24 00:26:14.818081 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 00:26:14.818144 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 24414 usecs
Apr 24 00:26:14.819183 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 24 00:26:14.819247 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 24 00:26:14.819303 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 24 00:26:14.819358 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 24 00:26:14.820278 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 24 00:26:14.820348 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 24 00:26:14.820405 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 24 00:26:14.820462 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 24 00:26:14.821381 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 24 00:26:14.821445 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 24 00:26:14.821502 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 24 00:26:14.822057 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 24 00:26:14.822122 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 24 00:26:14.823043 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 24 00:26:14.823107 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 00:26:14.823163 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 22460 usecs
Apr 24 00:26:14.823496 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 24 00:26:14.824046 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 24 00:26:14.824105 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 24 00:26:14.824448 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 24 00:26:14.824511 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 24 00:26:14.824519 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 00:26:14.825010 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 00:26:14.825017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 00:26:14.825023 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 00:26:14.825029 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 00:26:14.825035 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 00:26:14.825044 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 00:26:14.825050 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 00:26:14.825056 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 00:26:14.825062 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 00:26:14.825068 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 00:26:14.825074 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 00:26:14.825080 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 00:26:14.825086 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 00:26:14.825092 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 00:26:14.825099 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 00:26:14.825105 kernel: iommu: Default domain type: Translated
Apr 24 00:26:14.825111 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 00:26:14.825117 kernel: efivars: Registered efivars operations
Apr 24 00:26:14.825123 kernel: PCI: Using ACPI for IRQ routing
Apr 24 00:26:14.825129 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 00:26:14.825135 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Apr 24 00:26:14.825141 kernel: e820: reserve RAM buffer [mem 0x9a106018-0x9bffffff]
Apr 24 00:26:14.825146 kernel: e820: reserve RAM buffer [mem 0x9a143018-0x9bffffff]
Apr 24 00:26:14.825154 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Apr 24 00:26:14.825159 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Apr 24 00:26:14.825228 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 00:26:14.825285 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 00:26:14.825345 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 00:26:14.825353 kernel: vgaarb: loaded
Apr 24 00:26:14.825359 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 00:26:14.825365 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 00:26:14.825371 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 00:26:14.825379 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 00:26:14.825386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 00:26:14.825392 kernel: pnp: PnP ACPI init
Apr 24 00:26:14.828032 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 24 00:26:14.828044 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 00:26:14.828050 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 00:26:14.828057 kernel: NET: Registered PF_INET protocol family
Apr 24 00:26:14.828063 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 00:26:14.828073 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 00:26:14.828079 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 00:26:14.828085 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 00:26:14.828091 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 00:26:14.828097 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 00:26:14.828103 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:26:14.828110 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:26:14.828116 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 00:26:14.828121 kernel: NET: Registered PF_XDP protocol family
Apr 24 00:26:14.828187 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 24 00:26:14.828248 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 24 00:26:14.828303 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 00:26:14.828358 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 00:26:14.829495 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 00:26:14.830497 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 24 00:26:14.831068 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 24 00:26:14.831128 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 24 00:26:14.831136 kernel: PCI: CLS 0 bytes, default 64
Apr 24 00:26:14.831142 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 00:26:14.831148 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 00:26:14.831155 kernel: Initialise system trusted keyrings
Apr 24 00:26:14.831161 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 00:26:14.831167 kernel: Key type asymmetric registered
Apr 24 00:26:14.831173 kernel: Asymmetric key parser 'x509' registered
Apr 24 00:26:14.831190 kernel: hrtimer: interrupt took 3670760 ns
Apr 24 00:26:14.831200 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 00:26:14.831206 kernel: io scheduler mq-deadline registered
Apr 24 00:26:14.831212 kernel: io scheduler kyber registered
Apr 24 00:26:14.831218 kernel: io scheduler bfq registered
Apr 24 00:26:14.831224 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 00:26:14.831231 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 00:26:14.831237 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 00:26:14.831243 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 00:26:14.831249 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 00:26:14.831257 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 00:26:14.831263 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 00:26:14.831269 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 00:26:14.831275 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 00:26:14.834056 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 00:26:14.834074 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 00:26:14.834191 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 00:26:14.834248 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T00:26:09 UTC (1776990369)
Apr 24 00:26:14.834319 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 24 00:26:14.834327 kernel: intel_pstate: CPU model not supported
Apr 24 00:26:14.834334 kernel: efifb: probing for efifb
Apr 24 00:26:14.834340 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 24 00:26:14.834354 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 24 00:26:14.834360 kernel: efifb: scrolling: redraw
Apr 24 00:26:14.834368 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 00:26:14.834374 kernel: Console: switching to colour frame buffer device 160x50
Apr 24 00:26:14.834380 kernel: fb0: EFI VGA frame buffer device
Apr 24 00:26:14.834387 kernel: pstore: Using crash dump compression: deflate
Apr 24 00:26:14.834393 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 00:26:14.834399 kernel: NET: Registered PF_INET6 protocol family
Apr 24 00:26:14.834405 kernel: Segment Routing with IPv6
Apr 24 00:26:14.834411 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 00:26:14.834417 kernel: NET: Registered PF_PACKET protocol family
Apr 24 00:26:14.834425 kernel: Key type dns_resolver registered
Apr 24 00:26:14.834431 kernel: IPI shorthand broadcast: enabled
Apr 24 00:26:14.834437 kernel: sched_clock: Marking stable (22370356109, 1706316348)->(25439437127, -1362764670)
Apr 24 00:26:14.834443 kernel: registered taskstats version 1
Apr 24 00:26:14.834449 kernel: Loading compiled-in X.509 certificates
Apr 24 00:26:14.834455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0'
Apr 24 00:26:14.834461 kernel: Demotion targets for Node 0: null
Apr 24 00:26:14.834467 kernel: Key type .fscrypt registered
Apr 24 00:26:14.834473 kernel: Key type fscrypt-provisioning registered
Apr 24 00:26:14.834481 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 00:26:14.834487 kernel: ima: Allocated hash algorithm: sha1
Apr 24 00:26:14.834493 kernel: ima: No architecture policies found
Apr 24 00:26:14.834499 kernel: clk: Disabling unused clocks
Apr 24 00:26:14.834505 kernel: Warning: unable to open an initial console.
Apr 24 00:26:14.834511 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 24 00:26:14.834517 kernel: Write protecting the kernel read-only data: 40960k
Apr 24 00:26:14.835035 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 24 00:26:14.835044 kernel: Run /init as init process
Apr 24 00:26:14.835051 kernel: with arguments:
Apr 24 00:26:14.835057 kernel: /init
Apr 24 00:26:14.835064 kernel: with environment:
Apr 24 00:26:14.835070 kernel: HOME=/
Apr 24 00:26:14.835076 kernel: TERM=linux
Apr 24 00:26:14.835205 systemd[1]: Successfully made /usr/ read-only.
Apr 24 00:26:14.835216 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:26:14.835228 systemd[1]: Detected virtualization kvm.
Apr 24 00:26:14.835235 systemd[1]: Detected architecture x86-64.
Apr 24 00:26:14.835241 systemd[1]: Running in initrd.
Apr 24 00:26:14.835247 systemd[1]: No hostname configured, using default hostname.
Apr 24 00:26:14.835254 systemd[1]: Hostname set to .
Apr 24 00:26:14.835260 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 00:26:14.835267 systemd[1]: Queued start job for default target initrd.target.
Apr 24 00:26:14.835273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:26:14.835282 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:26:14.835289 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 24 00:26:14.835295 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 00:26:14.835302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 24 00:26:14.835309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 24 00:26:14.835316 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 24 00:26:14.835324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 24 00:26:14.835331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 00:26:14.835337 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 00:26:14.835344 systemd[1]: Reached target paths.target - Path Units. Apr 24 00:26:14.835350 systemd[1]: Reached target slices.target - Slice Units. Apr 24 00:26:14.835357 systemd[1]: Reached target swap.target - Swaps. Apr 24 00:26:14.835363 systemd[1]: Reached target timers.target - Timer Units. Apr 24 00:26:14.835369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 00:26:14.835376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 00:26:14.835384 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 00:26:14.835391 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 24 00:26:14.835397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 00:26:14.835405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 00:26:14.835412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 24 00:26:14.835418 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 00:26:14.835424 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 24 00:26:14.835431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 00:26:14.835439 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 24 00:26:14.835447 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 24 00:26:14.835453 systemd[1]: Starting systemd-fsck-usr.service... Apr 24 00:26:14.835460 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 00:26:14.835466 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 00:26:14.835473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:26:14.835479 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 24 00:26:14.836121 systemd-journald[198]: Collecting audit messages is disabled. Apr 24 00:26:14.836142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 00:26:14.836153 systemd[1]: Finished systemd-fsck-usr.service. Apr 24 00:26:14.836159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 00:26:14.836166 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:26:14.836174 systemd-journald[198]: Journal started Apr 24 00:26:14.836311 systemd-journald[198]: Runtime Journal (/run/log/journal/423e13c9fb9f4a86835800b177ca2615) is 5.9M, max 47.9M, 41.9M free. Apr 24 00:26:14.723238 systemd-modules-load[201]: Inserted module 'overlay' Apr 24 00:26:14.862371 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 24 00:26:14.905911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 00:26:14.933068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 00:26:14.989350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 00:26:15.121927 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 00:26:15.201976 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 24 00:26:15.202002 kernel: Bridge firewalling registered Apr 24 00:26:15.175236 systemd-modules-load[201]: Inserted module 'br_netfilter' Apr 24 00:26:15.178320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 00:26:15.181137 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 24 00:26:15.240324 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 00:26:15.251434 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 00:26:15.300405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 00:26:15.365119 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 24 00:26:15.385516 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 00:26:15.473066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 00:26:15.506452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 24 00:26:15.540420 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e Apr 24 00:26:15.726310 systemd-resolved[251]: Positive Trust Anchors: Apr 24 00:26:15.727389 systemd-resolved[251]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 00:26:15.727417 systemd-resolved[251]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 00:26:15.780249 systemd-resolved[251]: Defaulting to hostname 'linux'. Apr 24 00:26:15.811048 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 00:26:15.883378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 00:26:16.705166 kernel: SCSI subsystem initialized Apr 24 00:26:16.744284 kernel: Loading iSCSI transport class v2.0-870. Apr 24 00:26:16.808488 kernel: iscsi: registered transport (tcp) Apr 24 00:26:16.941364 kernel: iscsi: registered transport (qla4xxx) Apr 24 00:26:16.942084 kernel: QLogic iSCSI HBA Driver Apr 24 00:26:17.152228 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Apr 24 00:26:17.304103 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 00:26:17.305226 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 00:26:17.593396 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 24 00:26:17.644208 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 24 00:26:17.881134 kernel: raid6: avx512x4 gen() 22766 MB/s Apr 24 00:26:17.907032 kernel: raid6: avx512x2 gen() 26453 MB/s Apr 24 00:26:17.933996 kernel: raid6: avx512x1 gen() 27871 MB/s Apr 24 00:26:17.959032 kernel: raid6: avx2x4 gen() 21868 MB/s Apr 24 00:26:17.984999 kernel: raid6: avx2x2 gen() 23424 MB/s Apr 24 00:26:18.022222 kernel: raid6: avx2x1 gen() 16751 MB/s Apr 24 00:26:18.022271 kernel: raid6: using algorithm avx512x1 gen() 27871 MB/s Apr 24 00:26:18.060410 kernel: raid6: .... xor() 16855 MB/s, rmw enabled Apr 24 00:26:18.061392 kernel: raid6: using avx512x2 recovery algorithm Apr 24 00:26:18.170224 kernel: xor: automatically using best checksumming function avx Apr 24 00:26:19.015497 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 24 00:26:19.064104 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 24 00:26:19.071238 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 00:26:19.301269 systemd-udevd[453]: Using default interface naming scheme 'v255'. Apr 24 00:26:19.330032 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 00:26:19.358847 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 24 00:26:19.503269 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation Apr 24 00:26:19.715489 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 00:26:19.769413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Apr 24 00:26:20.046482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 00:26:20.052358 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 24 00:26:20.406481 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 24 00:26:20.470217 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 00:26:20.537513 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 24 00:26:20.597239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 00:26:20.736460 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 24 00:26:20.737359 kernel: GPT:9289727 != 19775487 Apr 24 00:26:20.737378 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 24 00:26:20.737386 kernel: GPT:9289727 != 19775487 Apr 24 00:26:20.737395 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 24 00:26:20.737404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 00:26:20.737413 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 24 00:26:20.601228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:26:20.764418 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:26:20.827262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:26:20.857280 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 24 00:26:20.917265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 00:26:20.917509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:26:20.981256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:26:21.007906 kernel: libata version 3.00 loaded. 
Apr 24 00:26:21.056051 kernel: AES CTR mode by8 optimization enabled Apr 24 00:26:21.199264 kernel: ahci 0000:00:1f.2: version 3.0 Apr 24 00:26:21.199508 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 24 00:26:21.263417 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 24 00:26:21.263994 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 24 00:26:21.264076 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 24 00:26:21.292375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 00:26:21.331383 kernel: scsi host0: ahci Apr 24 00:26:21.332283 kernel: scsi host1: ahci Apr 24 00:26:21.332367 kernel: scsi host2: ahci Apr 24 00:26:21.332436 kernel: scsi host3: ahci Apr 24 00:26:21.351902 kernel: scsi host4: ahci Apr 24 00:26:21.363099 kernel: scsi host5: ahci Apr 24 00:26:21.387222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:26:21.509295 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 24 00:26:21.509318 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 24 00:26:21.509326 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 24 00:26:21.509334 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 24 00:26:21.509341 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 24 00:26:21.509356 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 24 00:26:21.520094 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 24 00:26:21.556517 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Apr 24 00:26:21.574978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 24 00:26:21.603248 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 24 00:26:21.700392 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 24 00:26:21.811167 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 24 00:26:21.822141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 00:26:21.822186 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 24 00:26:21.829183 disk-uuid[648]: Primary Header is updated. Apr 24 00:26:21.829183 disk-uuid[648]: Secondary Entries is updated. Apr 24 00:26:21.829183 disk-uuid[648]: Secondary Header is updated. Apr 24 00:26:21.883963 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 24 00:26:21.905085 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 24 00:26:21.905129 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 24 00:26:21.923169 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 24 00:26:21.957891 kernel: ata3.00: LPM support broken, forcing max_power Apr 24 00:26:21.957934 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 24 00:26:21.957944 kernel: ata3.00: applying bridge limits Apr 24 00:26:21.989266 kernel: ata3.00: LPM support broken, forcing max_power Apr 24 00:26:21.989307 kernel: ata3.00: configured for UDMA/100 Apr 24 00:26:22.021118 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 24 00:26:22.137205 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 24 00:26:22.137469 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 00:26:22.164489 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 24 00:26:22.690326 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Apr 24 00:26:22.692957 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 00:26:22.732400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 00:26:22.798966 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 00:26:22.820366 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 00:26:22.957466 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 00:26:22.980309 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 00:26:22.988383 disk-uuid[649]: The operation has completed successfully. Apr 24 00:26:23.107237 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 24 00:26:23.108191 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 00:26:23.150891 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 00:26:23.250141 sh[675]: Success Apr 24 00:26:23.365457 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 24 00:26:23.365961 kernel: device-mapper: uevent: version 1.0.3 Apr 24 00:26:23.395218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 24 00:26:23.512244 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 24 00:26:24.019154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 00:26:24.051371 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 00:26:24.156311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 24 00:26:24.185278 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (685) Apr 24 00:26:24.210047 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca Apr 24 00:26:24.229344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 00:26:24.327485 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 24 00:26:24.330048 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 24 00:26:24.342126 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 00:26:24.360315 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 24 00:26:24.384433 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 24 00:26:24.388974 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 00:26:24.403454 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 24 00:26:24.658932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (720) Apr 24 00:26:24.681040 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:26:24.681096 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 00:26:24.760327 kernel: BTRFS info (device vda6): turning on async discard Apr 24 00:26:24.760436 kernel: BTRFS info (device vda6): enabling free space tree Apr 24 00:26:24.809218 kernel: BTRFS info (device vda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:26:24.827080 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 00:26:24.866177 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 24 00:26:25.901395 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 00:26:25.932461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 00:26:26.179301 systemd-networkd[861]: lo: Link UP Apr 24 00:26:26.179432 systemd-networkd[861]: lo: Gained carrier Apr 24 00:26:26.193129 systemd-networkd[861]: Enumeration completed Apr 24 00:26:26.194429 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 00:26:26.205060 systemd-networkd[861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:26:26.205064 systemd-networkd[861]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 00:26:26.309205 ignition[779]: Ignition 2.22.0 Apr 24 00:26:26.212881 systemd-networkd[861]: eth0: Link UP Apr 24 00:26:26.310162 ignition[779]: Stage: fetch-offline Apr 24 00:26:26.213019 systemd-networkd[861]: eth0: Gained carrier Apr 24 00:26:26.313490 ignition[779]: no configs at "/usr/lib/ignition/base.d" Apr 24 00:26:26.213028 systemd-networkd[861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:26:26.313506 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:26:26.214434 systemd[1]: Reached target network.target - Network. 
Apr 24 00:26:26.315053 ignition[779]: parsed url from cmdline: "" Apr 24 00:26:26.292946 systemd-networkd[861]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 00:26:26.315057 ignition[779]: no config URL provided Apr 24 00:26:26.315062 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 00:26:26.315068 ignition[779]: no config at "/usr/lib/ignition/user.ign" Apr 24 00:26:26.316960 ignition[779]: op(1): [started] loading QEMU firmware config module Apr 24 00:26:26.316964 ignition[779]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 24 00:26:26.591229 ignition[779]: op(1): [finished] loading QEMU firmware config module Apr 24 00:26:26.592015 ignition[779]: QEMU firmware config was not found. Ignoring... Apr 24 00:26:27.740081 systemd-networkd[861]: eth0: Gained IPv6LL Apr 24 00:26:31.160414 ignition[779]: parsing config with SHA512: f882f271970506a993551608118f83270933d150acd05152f163e5ef92d3e87ad313feb1d8ff9c6505926b3af45e536b6e33cf9c1d71fa0eb74bc08ea4ff8202 Apr 24 00:26:31.215249 unknown[779]: fetched base config from "system" Apr 24 00:26:31.215379 unknown[779]: fetched user config from "qemu" Apr 24 00:26:31.247065 ignition[779]: fetch-offline: fetch-offline passed Apr 24 00:26:31.247938 ignition[779]: Ignition finished successfully Apr 24 00:26:31.300241 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 00:26:31.318075 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 24 00:26:31.321287 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 24 00:26:32.026155 ignition[870]: Ignition 2.22.0 Apr 24 00:26:32.026327 ignition[870]: Stage: kargs Apr 24 00:26:32.027297 ignition[870]: no configs at "/usr/lib/ignition/base.d" Apr 24 00:26:32.027306 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:26:32.028354 ignition[870]: kargs: kargs passed Apr 24 00:26:32.028391 ignition[870]: Ignition finished successfully Apr 24 00:26:32.102237 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 24 00:26:32.113987 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 24 00:26:32.529451 ignition[878]: Ignition 2.22.0 Apr 24 00:26:32.530401 ignition[878]: Stage: disks Apr 24 00:26:32.530991 ignition[878]: no configs at "/usr/lib/ignition/base.d" Apr 24 00:26:32.530999 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:26:32.532946 ignition[878]: disks: disks passed Apr 24 00:26:32.532998 ignition[878]: Ignition finished successfully Apr 24 00:26:32.648297 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 24 00:26:32.673168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 24 00:26:32.710012 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 00:26:32.767429 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 00:26:32.804444 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 00:26:32.842029 systemd[1]: Reached target basic.target - Basic System. Apr 24 00:26:32.878451 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 24 00:26:33.010335 systemd-fsck[888]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 24 00:26:33.033190 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 00:26:33.094234 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 24 00:26:33.983059 kernel: EXT4-fs (vda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none. Apr 24 00:26:33.986311 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 00:26:33.990412 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 00:26:34.018270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 00:26:34.093290 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 00:26:34.109440 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 24 00:26:34.144463 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897) Apr 24 00:26:34.109481 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 00:26:34.109504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 00:26:34.178009 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:26:34.178069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 00:26:34.237443 kernel: BTRFS info (device vda6): turning on async discard Apr 24 00:26:34.238043 kernel: BTRFS info (device vda6): enabling free space tree Apr 24 00:26:34.305235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 00:26:34.315389 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 00:26:34.365499 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 24 00:26:34.634768 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 00:26:34.670130 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory Apr 24 00:26:34.723348 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 00:26:34.746178 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 00:26:35.431462 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 00:26:35.456500 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 00:26:35.516364 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 00:26:35.555274 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 24 00:26:35.589964 kernel: BTRFS info (device vda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 00:26:35.667425 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 24 00:26:35.805022 ignition[1011]: INFO : Ignition 2.22.0 Apr 24 00:26:35.819337 ignition[1011]: INFO : Stage: mount Apr 24 00:26:35.819337 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 00:26:35.819337 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 00:26:35.819337 ignition[1011]: INFO : mount: mount passed Apr 24 00:26:35.819337 ignition[1011]: INFO : Ignition finished successfully Apr 24 00:26:35.911310 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 00:26:35.928066 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 00:26:36.003362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 24 00:26:36.091224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Apr 24 00:26:36.129190 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:26:36.129237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:26:36.196281 kernel: BTRFS info (device vda6): turning on async discard
Apr 24 00:26:36.196352 kernel: BTRFS info (device vda6): enabling free space tree
Apr 24 00:26:36.203212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:26:36.404136 ignition[1040]: INFO : Ignition 2.22.0
Apr 24 00:26:36.422492 ignition[1040]: INFO : Stage: files
Apr 24 00:26:36.422492 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:26:36.422492 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 00:26:36.422492 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 00:26:36.422492 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 00:26:36.422492 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 00:26:36.558297 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 00:26:36.558297 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 00:26:36.558297 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 00:26:36.558297 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:26:36.558297 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 00:26:36.435998 unknown[1040]: wrote ssh authorized keys file for user: core
Apr 24 00:26:36.884969 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 00:26:37.027183 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:26:37.044108 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 00:26:37.044108 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 24 00:26:37.311942 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 00:26:37.428120 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 00:26:37.428120 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:26:37.461261 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 24 00:26:37.676913 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 00:26:38.098933 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 00:26:38.098933 ignition[1040]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 24 00:26:38.137332 ignition[1040]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 24 00:26:38.155954 ignition[1040]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:26:38.283213 ignition[1040]: INFO : files: files passed
Apr 24 00:26:38.283213 ignition[1040]: INFO : Ignition finished successfully
Apr 24 00:26:38.304943 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 00:26:38.315450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 00:26:38.425270 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 00:26:38.433307 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 00:26:38.433477 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 00:26:38.503139 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 00:26:38.521063 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:26:38.533986 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:26:38.528377 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:26:38.563400 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:26:38.557772 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 00:26:38.598131 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 00:26:38.705238 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 00:26:38.705492 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 00:26:38.723341 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 00:26:38.733936 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 00:26:38.760342 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 00:26:38.782481 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 00:26:38.836035 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:26:38.860095 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 00:26:38.921360 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:26:38.922002 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:26:38.959498 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 00:26:38.975205 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 00:26:38.975986 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:26:39.002398 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 00:26:39.007355 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 00:26:39.026026 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 00:26:39.026211 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:26:39.051971 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 00:26:39.069807 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:26:39.088184 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 00:26:39.088809 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:26:39.142094 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 00:26:39.151108 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 00:26:39.159977 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 00:26:39.189975 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 00:26:39.190260 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:26:39.207091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:26:39.215945 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:26:39.251416 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 00:26:39.260280 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:26:39.260938 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 00:26:39.261229 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:26:39.306908 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 00:26:39.307167 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:26:39.325134 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 00:26:39.334401 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 00:26:39.335266 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:26:39.367053 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 00:26:39.386505 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 00:26:39.408399 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 00:26:39.408828 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:26:39.423417 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 00:26:39.423483 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:26:39.446147 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 00:26:39.446240 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:26:39.454317 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 00:26:39.454391 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 00:26:39.507957 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 00:26:39.531185 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 00:26:39.538496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 00:26:39.538758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:26:39.548499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 00:26:39.549048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:26:39.621770 ignition[1098]: INFO : Ignition 2.22.0
Apr 24 00:26:39.621770 ignition[1098]: INFO : Stage: umount
Apr 24 00:26:39.621770 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:26:39.621770 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 00:26:39.621770 ignition[1098]: INFO : umount: umount passed
Apr 24 00:26:39.621770 ignition[1098]: INFO : Ignition finished successfully
Apr 24 00:26:39.571324 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 00:26:39.574418 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 00:26:39.574491 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 00:26:39.603798 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 00:26:39.604048 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 00:26:39.709804 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 00:26:39.710117 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 00:26:39.717996 systemd[1]: Stopped target network.target - Network.
Apr 24 00:26:39.733338 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 00:26:39.733387 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 00:26:39.747132 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 00:26:39.747180 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 00:26:39.780055 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 00:26:39.780171 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 00:26:39.787244 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 00:26:39.787279 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 00:26:39.802731 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 00:26:39.802771 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 00:26:39.819038 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 00:26:39.850823 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 00:26:39.876247 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 00:26:39.877192 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 00:26:39.918709 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 24 00:26:39.920042 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 00:26:39.920513 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 00:26:39.947943 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 24 00:26:39.948999 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 24 00:26:39.954252 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 00:26:39.954285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:26:39.972504 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 00:26:39.994782 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 00:26:39.994933 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:26:40.016062 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 00:26:40.016107 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:26:40.084988 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 00:26:40.085124 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:26:40.102164 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 00:26:40.102213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:26:40.140211 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:26:40.141469 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 00:26:40.141704 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:26:40.190350 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 00:26:40.190825 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 00:26:40.216267 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 00:26:40.216963 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:26:40.224975 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 00:26:40.225007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:26:40.244811 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 00:26:40.244932 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:26:40.261412 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 00:26:40.261456 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:26:40.294436 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 00:26:40.294489 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:26:40.308809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 00:26:40.309116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:26:40.324278 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 00:26:40.336300 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 24 00:26:40.336350 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:26:40.416106 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 00:26:40.416261 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:26:40.435076 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 00:26:40.435124 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:26:40.476429 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 00:26:40.476782 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:26:40.487445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:26:40.487487 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:26:40.525100 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 24 00:26:40.525152 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 24 00:26:40.525179 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 24 00:26:40.525206 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:26:40.590328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 00:26:40.590812 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 00:26:40.619216 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 00:26:40.636706 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 00:26:40.680742 systemd[1]: Switching root.
Apr 24 00:26:40.718116 systemd-journald[198]: Journal stopped
Apr 24 00:26:43.005181 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Apr 24 00:26:43.005234 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 00:26:43.005246 kernel: SELinux: policy capability open_perms=1
Apr 24 00:26:43.005257 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 00:26:43.005265 kernel: SELinux: policy capability always_check_network=0
Apr 24 00:26:43.005276 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 00:26:43.005287 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 00:26:43.005295 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 00:26:43.005303 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 00:26:43.005313 kernel: SELinux: policy capability userspace_initial_context=0
Apr 24 00:26:43.005322 kernel: audit: type=1403 audit(1776990400.954:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 00:26:43.005331 systemd[1]: Successfully loaded SELinux policy in 125.891ms.
Apr 24 00:26:43.005347 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.706ms.
Apr 24 00:26:43.005356 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:26:43.005366 systemd[1]: Detected virtualization kvm.
Apr 24 00:26:43.005374 systemd[1]: Detected architecture x86-64.
Apr 24 00:26:43.005382 systemd[1]: Detected first boot.
Apr 24 00:26:43.005393 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 00:26:43.005402 zram_generator::config[1145]: No configuration found.
Apr 24 00:26:43.005411 kernel: Guest personality initialized and is inactive
Apr 24 00:26:43.005419 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 24 00:26:43.005426 kernel: Initialized host personality
Apr 24 00:26:43.005435 kernel: NET: Registered PF_VSOCK protocol family
Apr 24 00:26:43.005443 systemd[1]: Populated /etc with preset unit settings.
Apr 24 00:26:43.005451 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 24 00:26:43.005459 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 00:26:43.005468 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 00:26:43.005477 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:26:43.005485 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 00:26:43.005493 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 00:26:43.005500 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 00:26:43.005510 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 00:26:43.005519 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 00:26:43.005669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 00:26:43.005678 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 00:26:43.005686 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 00:26:43.005695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:26:43.005704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:26:43.005712 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 00:26:43.005723 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 00:26:43.005731 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 00:26:43.005739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:26:43.005747 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 00:26:43.005758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:26:43.005769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:26:43.005776 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 00:26:43.005784 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 00:26:43.005795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:26:43.005803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 00:26:43.005810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:26:43.005818 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:26:43.005826 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:26:43.005834 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:26:43.005842 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 00:26:43.005850 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 00:26:43.005932 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 24 00:26:43.005944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:26:43.005952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:26:43.005960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:26:43.005968 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 00:26:43.005976 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 00:26:43.005984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 00:26:43.005993 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 00:26:43.006001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:26:43.006009 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 00:26:43.006018 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 00:26:43.006026 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 00:26:43.006035 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 00:26:43.006044 systemd[1]: Reached target machines.target - Containers.
Apr 24 00:26:43.006051 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 00:26:43.006060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:26:43.006068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:26:43.006075 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 00:26:43.006085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:26:43.006092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 00:26:43.006100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:26:43.006108 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 00:26:43.006116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:26:43.006124 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 00:26:43.006132 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 00:26:43.006140 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 00:26:43.006147 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 00:26:43.006157 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 00:26:43.006165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:26:43.006173 kernel: ACPI: bus type drm_connector registered
Apr 24 00:26:43.006180 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:26:43.006188 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:26:43.006196 kernel: loop: module loaded
Apr 24 00:26:43.006204 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:26:43.006212 kernel: fuse: init (API version 7.41)
Apr 24 00:26:43.006220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 00:26:43.006250 systemd-journald[1230]: Collecting audit messages is disabled.
Apr 24 00:26:43.006269 systemd-journald[1230]: Journal started
Apr 24 00:26:43.006286 systemd-journald[1230]: Runtime Journal (/run/log/journal/423e13c9fb9f4a86835800b177ca2615) is 5.9M, max 47.9M, 41.9M free.
Apr 24 00:26:41.759501 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 00:26:41.776067 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 00:26:41.777306 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 00:26:41.777954 systemd[1]: systemd-journald.service: Consumed 3.695s CPU time.
Apr 24 00:26:43.027472 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 24 00:26:43.065659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:26:43.087853 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 00:26:43.087984 systemd[1]: Stopped verity-setup.service.
Apr 24 00:26:43.109786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:26:43.119782 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:26:43.129009 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 00:26:43.137785 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 00:26:43.146846 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 00:26:43.155106 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 00:26:43.164305 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 00:26:43.173968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 00:26:43.182360 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 00:26:43.192363 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:26:43.203262 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 00:26:43.204075 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 00:26:43.214199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 00:26:43.214995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 00:26:43.224755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 00:26:43.225190 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 00:26:43.235058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 00:26:43.235358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 00:26:43.245787 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 00:26:43.246148 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 00:26:43.255735 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 00:26:43.256119 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 00:26:43.265819 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:26:43.275470 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:26:43.287081 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 00:26:43.297840 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 24 00:26:43.308736 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:26:43.341263 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:26:43.351974 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 00:26:43.362854 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 00:26:43.372207 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 00:26:43.372322 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 00:26:43.382044 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 24 00:26:43.395244 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 00:26:43.405276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:26:43.407505 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 00:26:43.417986 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 00:26:43.428023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 00:26:43.429379 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 00:26:43.437987 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 00:26:43.439311 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 00:26:43.461712 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 00:26:43.470733 systemd-journald[1230]: Time spent on flushing to /var/log/journal/423e13c9fb9f4a86835800b177ca2615 is 21.847ms for 1054 entries. Apr 24 00:26:43.470733 systemd-journald[1230]: System Journal (/var/log/journal/423e13c9fb9f4a86835800b177ca2615) is 8M, max 195.6M, 187.6M free. Apr 24 00:26:43.529845 systemd-journald[1230]: Received client request to flush runtime journal. 
Apr 24 00:26:43.475962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 00:26:43.492025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 00:26:43.503145 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 00:26:43.516171 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 00:26:43.535274 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 00:26:43.551214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 00:26:43.564124 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 00:26:43.570784 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Apr 24 00:26:43.570797 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Apr 24 00:26:43.577683 kernel: loop0: detected capacity change from 0 to 110984 Apr 24 00:26:43.583144 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 24 00:26:43.593232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 00:26:43.607355 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 00:26:43.645951 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 00:26:43.680409 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 00:26:43.691985 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 00:26:43.710085 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 24 00:26:43.724405 kernel: loop1: detected capacity change from 0 to 219192 Apr 24 00:26:43.758364 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Apr 24 00:26:43.758796 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. 
Apr 24 00:26:43.762333 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 00:26:43.801699 kernel: loop2: detected capacity change from 0 to 128560 Apr 24 00:26:43.870798 kernel: loop3: detected capacity change from 0 to 110984 Apr 24 00:26:43.918004 kernel: loop4: detected capacity change from 0 to 219192 Apr 24 00:26:43.932977 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 00:26:43.945712 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 00:26:43.950379 kernel: loop5: detected capacity change from 0 to 128560 Apr 24 00:26:43.992085 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 24 00:26:43.992498 (sd-merge)[1291]: Merged extensions into '/usr'. Apr 24 00:26:43.999332 systemd[1]: Reload requested from client PID 1265 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 00:26:43.999428 systemd[1]: Reloading... Apr 24 00:26:44.006422 systemd-udevd[1293]: Using default interface naming scheme 'v255'. Apr 24 00:26:44.082778 zram_generator::config[1319]: No configuration found. Apr 24 00:26:44.177692 ldconfig[1260]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 00:26:44.280741 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 00:26:44.296941 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 24 00:26:44.310049 kernel: ACPI: button: Power Button [PWRF] Apr 24 00:26:44.326066 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 24 00:26:44.336134 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 00:26:44.339452 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 00:26:44.339518 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 24 00:26:44.340136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 00:26:44.352751 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 00:26:44.359017 systemd[1]: Reloading finished in 359 ms. Apr 24 00:26:44.377477 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 00:26:44.391969 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 00:26:44.402510 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 00:26:44.458264 systemd[1]: Starting ensure-sysext.service... Apr 24 00:26:44.469742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 00:26:44.487226 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 00:26:44.503839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 00:26:44.530339 systemd[1]: Reload requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)... Apr 24 00:26:44.530350 systemd[1]: Reloading... Apr 24 00:26:44.566355 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 24 00:26:44.566470 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 24 00:26:44.569179 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 00:26:44.570185 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 00:26:44.573839 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 00:26:44.574450 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. 
Apr 24 00:26:44.575311 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Apr 24 00:26:44.591203 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:26:44.591213 systemd-tmpfiles[1412]: Skipping /boot Apr 24 00:26:44.622318 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:26:44.622328 systemd-tmpfiles[1412]: Skipping /boot Apr 24 00:26:44.822079 zram_generator::config[1444]: No configuration found. Apr 24 00:26:45.381846 systemd[1]: Reloading finished in 851 ms. Apr 24 00:26:45.418150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 00:26:45.450048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 00:26:45.498736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:26:45.501004 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:26:45.513731 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 00:26:45.523371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:26:45.539286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 00:26:45.551835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 00:26:45.564732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 00:26:45.573196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 24 00:26:45.573427 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 24 00:26:45.581424 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 00:26:45.594820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 00:26:45.606343 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 00:26:45.619015 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 00:26:45.638183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:26:45.647461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:26:45.653796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:26:45.654109 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 00:26:45.667090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:26:45.667409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:26:45.683256 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 00:26:45.702279 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:26:45.702509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 00:26:45.716079 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 00:26:45.730735 augenrules[1510]: No rules Apr 24 00:26:45.732102 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:26:45.732448 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Apr 24 00:26:45.752854 systemd[1]: Finished ensure-sysext.service. Apr 24 00:26:45.762148 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 00:26:45.763129 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 00:26:45.778337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:26:45.792184 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:26:45.792670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:26:45.794125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 00:26:45.806764 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 00:26:45.808483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 00:26:45.817866 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 00:26:45.831840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:26:45.831976 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 24 00:26:45.836981 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 00:26:45.857403 augenrules[1531]: /sbin/augenrules: No change Apr 24 00:26:45.859030 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 00:26:45.859253 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 24 00:26:45.859278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:26:45.863762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:26:45.864152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 00:26:45.868000 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 00:26:45.868282 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 00:26:45.879057 augenrules[1556]: No rules Apr 24 00:26:45.880060 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:26:45.891326 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:26:45.891863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:26:45.892091 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:26:45.902790 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:26:45.903364 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 00:26:45.909842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 00:26:45.910074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 00:26:45.940385 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 00:26:45.967328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 24 00:26:45.968230 systemd-networkd[1411]: lo: Link UP Apr 24 00:26:45.968322 systemd-networkd[1411]: lo: Gained carrier Apr 24 00:26:45.969821 systemd-networkd[1411]: Enumeration completed Apr 24 00:26:45.971978 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:26:45.971985 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 00:26:45.973774 systemd-networkd[1411]: eth0: Link UP Apr 24 00:26:45.974035 systemd-networkd[1411]: eth0: Gained carrier Apr 24 00:26:45.974047 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:26:45.977346 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 00:26:45.988754 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 24 00:26:46.001166 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 00:26:46.002224 systemd-resolved[1492]: Positive Trust Anchors: Apr 24 00:26:46.002322 systemd-resolved[1492]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 00:26:46.002347 systemd-resolved[1492]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 00:26:46.009088 systemd-resolved[1492]: Defaulting to hostname 'linux'. 
Apr 24 00:26:46.011172 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 00:26:46.021178 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 00:26:46.031086 systemd[1]: Reached target network.target - Network. Apr 24 00:26:46.038765 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 00:26:46.041708 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 00:26:46.042701 systemd-timesyncd[1545]: Network configuration changed, trying to establish connection. Apr 24 00:26:46.852836 systemd-resolved[1492]: Clock change detected. Flushing caches. Apr 24 00:26:46.852874 systemd-timesyncd[1545]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 24 00:26:46.852917 systemd-timesyncd[1545]: Initial clock synchronization to Fri 2026-04-24 00:26:46.852654 UTC. Apr 24 00:26:46.857434 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 00:26:46.868694 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 00:26:46.879021 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 00:26:46.889587 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 24 00:26:46.899037 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 00:26:46.909852 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 00:26:46.909879 systemd[1]: Reached target paths.target - Path Units. Apr 24 00:26:46.917392 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 00:26:46.925907 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 24 00:26:46.934793 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 00:26:46.945431 systemd[1]: Reached target timers.target - Timer Units. Apr 24 00:26:46.955571 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 00:26:46.969899 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 00:26:46.983870 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 24 00:26:46.995083 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 24 00:26:47.006487 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 24 00:26:47.019639 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 00:26:47.028443 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 24 00:26:47.039819 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 24 00:26:47.050781 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 00:26:47.062460 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 00:26:47.070600 systemd[1]: Reached target basic.target - Basic System. Apr 24 00:26:47.078413 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:26:47.078596 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:26:47.080062 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 00:26:47.109948 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 00:26:47.120615 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 00:26:47.132889 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 24 00:26:47.164728 jq[1579]: false Apr 24 00:26:47.153741 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 00:26:47.164042 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 00:26:47.165856 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 24 00:26:47.175964 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 00:26:47.186437 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 00:26:47.186785 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing passwd entry cache Apr 24 00:26:47.186955 oslogin_cache_refresh[1581]: Refreshing passwd entry cache Apr 24 00:26:47.196657 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 00:26:47.200632 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting users, quitting Apr 24 00:26:47.200632 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:26:47.200632 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Refreshing group entry cache Apr 24 00:26:47.200476 oslogin_cache_refresh[1581]: Failure getting users, quitting Apr 24 00:26:47.200580 oslogin_cache_refresh[1581]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:26:47.200635 oslogin_cache_refresh[1581]: Refreshing group entry cache Apr 24 00:26:47.211308 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Failure getting groups, quitting Apr 24 00:26:47.211401 oslogin_cache_refresh[1581]: Failure getting groups, quitting Apr 24 00:26:47.211690 google_oslogin_nss_cache[1581]: oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Apr 24 00:26:47.211488 oslogin_cache_refresh[1581]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 24 00:26:47.213679 extend-filesystems[1580]: Found /dev/vda6 Apr 24 00:26:47.214324 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 00:26:47.225767 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 00:26:47.229793 extend-filesystems[1580]: Found /dev/vda9 Apr 24 00:26:47.245908 extend-filesystems[1580]: Checking size of /dev/vda9 Apr 24 00:26:47.239674 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 00:26:47.240368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 00:26:47.253029 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 00:26:47.258635 extend-filesystems[1580]: Resized partition /dev/vda9 Apr 24 00:26:47.265641 extend-filesystems[1602]: resize2fs 1.47.3 (8-Jul-2025) Apr 24 00:26:47.297726 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 24 00:26:47.263692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 00:26:47.304332 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 00:26:47.317696 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 00:26:47.319289 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 00:26:47.320477 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 24 00:26:47.323444 jq[1603]: true Apr 24 00:26:47.321011 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 24 00:26:47.331658 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 24 00:26:47.332389 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 00:26:47.345359 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 00:26:47.345883 update_engine[1598]: I20260424 00:26:47.345382 1598 main.cc:92] Flatcar Update Engine starting Apr 24 00:26:47.345752 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 00:26:47.366871 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 24 00:26:47.390426 jq[1610]: true Apr 24 00:26:47.394603 extend-filesystems[1602]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 24 00:26:47.394603 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 24 00:26:47.394603 extend-filesystems[1602]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 24 00:26:47.436414 extend-filesystems[1580]: Resized filesystem in /dev/vda9 Apr 24 00:26:47.404819 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 00:26:47.461640 tar[1607]: linux-amd64/LICENSE Apr 24 00:26:47.461640 tar[1607]: linux-amd64/helm Apr 24 00:26:47.442958 dbus-daemon[1577]: [system] SELinux support is enabled Apr 24 00:26:47.461915 update_engine[1598]: I20260424 00:26:47.447932 1598 update_check_scheduler.cc:74] Next update check in 3m54s Apr 24 00:26:47.425953 systemd-logind[1595]: Watching system buttons on /dev/input/event2 (Power Button) Apr 24 00:26:47.425966 systemd-logind[1595]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 00:26:47.426757 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 00:26:47.427383 systemd-logind[1595]: New seat seat0. Apr 24 00:26:47.428791 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 00:26:47.436811 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 24 00:26:47.445911 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 00:26:47.468007 dbus-daemon[1577]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 24 00:26:47.472615 systemd[1]: Started update-engine.service - Update Engine. Apr 24 00:26:47.484018 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 00:26:47.485455 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 00:26:47.495953 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 00:26:47.496046 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 00:26:47.510852 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 00:26:47.540722 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Apr 24 00:26:47.541845 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 00:26:47.556730 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 24 00:26:47.620322 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 00:26:47.740374 containerd[1622]: time="2026-04-24T00:26:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 24 00:26:47.741912 containerd[1622]: time="2026-04-24T00:26:47.741642205Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 24 00:26:47.751823 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 00:26:47.756485 containerd[1622]: time="2026-04-24T00:26:47.756054620Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.513µs" Apr 24 00:26:47.756485 containerd[1622]: time="2026-04-24T00:26:47.756357449Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 24 00:26:47.756485 containerd[1622]: time="2026-04-24T00:26:47.756376076Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 24 00:26:47.756485 containerd[1622]: time="2026-04-24T00:26:47.756482874Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 24 00:26:47.756669 containerd[1622]: time="2026-04-24T00:26:47.756591185Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 24 00:26:47.756669 containerd[1622]: time="2026-04-24T00:26:47.756614123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:26:47.756669 containerd[1622]: time="2026-04-24T00:26:47.756655935Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Apr 24 00:26:47.756669 containerd[1622]: time="2026-04-24T00:26:47.756663678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757420 containerd[1622]: time="2026-04-24T00:26:47.756992375Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757420 containerd[1622]: time="2026-04-24T00:26:47.757079339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757420 containerd[1622]: time="2026-04-24T00:26:47.757089920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757420 containerd[1622]: time="2026-04-24T00:26:47.757096035Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757420 containerd[1622]: time="2026-04-24T00:26:47.757332136Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757615 containerd[1622]: time="2026-04-24T00:26:47.757484781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757633 containerd[1622]: time="2026-04-24T00:26:47.757620623Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 24 00:26:47.757633 containerd[1622]: time="2026-04-24T00:26:47.757629712Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Apr 24 00:26:47.757884 containerd[1622]: time="2026-04-24T00:26:47.757725712Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 24 00:26:47.758678 containerd[1622]: time="2026-04-24T00:26:47.758438982Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 24 00:26:47.758678 containerd[1622]: time="2026-04-24T00:26:47.758642025Z" level=info msg="metadata content store policy set" policy=shared Apr 24 00:26:47.769592 containerd[1622]: time="2026-04-24T00:26:47.769490139Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 24 00:26:47.769762 containerd[1622]: time="2026-04-24T00:26:47.769747917Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 24 00:26:47.770378 containerd[1622]: time="2026-04-24T00:26:47.770363707Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 24 00:26:47.770421 containerd[1622]: time="2026-04-24T00:26:47.770413916Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 24 00:26:47.770465 containerd[1622]: time="2026-04-24T00:26:47.770458031Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 24 00:26:47.770601 containerd[1622]: time="2026-04-24T00:26:47.770488248Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 24 00:26:47.771107 containerd[1622]: time="2026-04-24T00:26:47.771095655Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 24 00:26:47.771301 containerd[1622]: time="2026-04-24T00:26:47.771137425Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 24 
00:26:47.771339 containerd[1622]: time="2026-04-24T00:26:47.771332440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 24 00:26:47.771377 containerd[1622]: time="2026-04-24T00:26:47.771370972Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 24 00:26:47.771406 containerd[1622]: time="2026-04-24T00:26:47.771400262Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 24 00:26:47.771441 containerd[1622]: time="2026-04-24T00:26:47.771434362Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 24 00:26:47.771672 containerd[1622]: time="2026-04-24T00:26:47.771657687Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 24 00:26:47.771740 containerd[1622]: time="2026-04-24T00:26:47.771727865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 24 00:26:47.771802 containerd[1622]: time="2026-04-24T00:26:47.771784772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 24 00:26:47.771833 containerd[1622]: time="2026-04-24T00:26:47.771827620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 24 00:26:47.771861 containerd[1622]: time="2026-04-24T00:26:47.771855489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 24 00:26:47.771898 containerd[1622]: time="2026-04-24T00:26:47.771891434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 24 00:26:47.771934 containerd[1622]: time="2026-04-24T00:26:47.771927183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 24 00:26:47.771963 containerd[1622]: 
time="2026-04-24T00:26:47.771957004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 24 00:26:47.772008 containerd[1622]: time="2026-04-24T00:26:47.771998390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 24 00:26:47.772043 containerd[1622]: time="2026-04-24T00:26:47.772036433Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 24 00:26:47.772071 containerd[1622]: time="2026-04-24T00:26:47.772065703Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 24 00:26:47.772127 containerd[1622]: time="2026-04-24T00:26:47.772119550Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 24 00:26:47.772352 containerd[1622]: time="2026-04-24T00:26:47.772344169Z" level=info msg="Start snapshots syncer" Apr 24 00:26:47.772485 containerd[1622]: time="2026-04-24T00:26:47.772473988Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 24 00:26:47.772821 containerd[1622]: time="2026-04-24T00:26:47.772795001Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 24 00:26:47.773663 containerd[1622]: time="2026-04-24T00:26:47.773647857Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 24 00:26:47.775808 containerd[1622]: time="2026-04-24T00:26:47.775786523Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 24 00:26:47.775946 containerd[1622]: time="2026-04-24T00:26:47.775933694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 24 00:26:47.776001 containerd[1622]: time="2026-04-24T00:26:47.775990829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776743208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776762359Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776772989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776780240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776788734Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776809142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776818058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776826724Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776943409Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776956803Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776963288Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776970194Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776975690Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 24 00:26:47.778291 containerd[1622]: time="2026-04-24T00:26:47.776983643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 24 00:26:47.778596 containerd[1622]: time="2026-04-24T00:26:47.777001261Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 24 00:26:47.778596 containerd[1622]: time="2026-04-24T00:26:47.777019518Z" level=info msg="runtime interface created" Apr 24 00:26:47.778596 containerd[1622]: time="2026-04-24T00:26:47.777025222Z" level=info msg="created NRI interface" Apr 24 00:26:47.778596 containerd[1622]: time="2026-04-24T00:26:47.777034583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 24 00:26:47.778596 containerd[1622]: time="2026-04-24T00:26:47.777048098Z" level=info msg="Connect containerd service" Apr 24 00:26:47.778596 containerd[1622]: time="2026-04-24T00:26:47.777069719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 00:26:47.779758 
containerd[1622]: time="2026-04-24T00:26:47.779738258Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 00:26:47.797571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 00:26:47.809480 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 00:26:47.847054 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 00:26:47.847792 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 00:26:47.862949 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 00:26:47.905423 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 00:26:47.920644 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 00:26:47.930914 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 00:26:47.940751 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 00:26:47.954646 containerd[1622]: time="2026-04-24T00:26:47.954282174Z" level=info msg="Start subscribing containerd event" Apr 24 00:26:47.954646 containerd[1622]: time="2026-04-24T00:26:47.954397970Z" level=info msg="Start recovering state" Apr 24 00:26:47.954749 containerd[1622]: time="2026-04-24T00:26:47.954707100Z" level=info msg="Start event monitor" Apr 24 00:26:47.954749 containerd[1622]: time="2026-04-24T00:26:47.954721375Z" level=info msg="Start cni network conf syncer for default" Apr 24 00:26:47.954749 containerd[1622]: time="2026-04-24T00:26:47.954729401Z" level=info msg="Start streaming server" Apr 24 00:26:47.954749 containerd[1622]: time="2026-04-24T00:26:47.954737321Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 24 00:26:47.954749 containerd[1622]: time="2026-04-24T00:26:47.954742708Z" level=info msg="runtime interface starting up..." 
Apr 24 00:26:47.954749 containerd[1622]: time="2026-04-24T00:26:47.954746738Z" level=info msg="starting plugins..." Apr 24 00:26:47.954827 containerd[1622]: time="2026-04-24T00:26:47.954756398Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 24 00:26:47.955862 containerd[1622]: time="2026-04-24T00:26:47.955788291Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 00:26:47.955862 containerd[1622]: time="2026-04-24T00:26:47.955834066Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 00:26:47.955958 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 00:26:47.956617 containerd[1622]: time="2026-04-24T00:26:47.956399870Z" level=info msg="containerd successfully booted in 0.217288s" Apr 24 00:26:48.033830 tar[1607]: linux-amd64/README.md Apr 24 00:26:48.060461 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 00:26:48.705990 systemd-networkd[1411]: eth0: Gained IPv6LL Apr 24 00:26:48.711905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 00:26:48.725107 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 00:26:48.738899 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 00:26:48.757289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:26:48.768300 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 00:26:48.821847 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 00:26:48.832137 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 24 00:26:48.832747 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 00:26:48.843824 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 24 00:26:48.864037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 00:26:48.875121 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:41142.service - OpenSSH per-connection server daemon (10.0.0.1:41142). Apr 24 00:26:49.006376 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 41142 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:26:49.009362 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:26:49.020960 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 00:26:49.031411 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 00:26:49.051369 systemd-logind[1595]: New session 1 of user core. Apr 24 00:26:49.071466 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 00:26:49.085797 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 00:26:49.108778 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 00:26:49.114740 systemd-logind[1595]: New session c1 of user core. Apr 24 00:26:49.268648 systemd[1710]: Queued start job for default target default.target. Apr 24 00:26:49.283662 systemd[1710]: Created slice app.slice - User Application Slice. Apr 24 00:26:49.283771 systemd[1710]: Reached target paths.target - Paths. Apr 24 00:26:49.283879 systemd[1710]: Reached target timers.target - Timers. Apr 24 00:26:49.285974 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 00:26:49.306878 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 00:26:49.307075 systemd[1710]: Reached target sockets.target - Sockets. Apr 24 00:26:49.307106 systemd[1710]: Reached target basic.target - Basic System. Apr 24 00:26:49.307128 systemd[1710]: Reached target default.target - Main User Target. 
Apr 24 00:26:49.307329 systemd[1710]: Startup finished in 179ms. Apr 24 00:26:49.307468 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 00:26:49.330950 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 00:26:49.369421 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:41152.service - OpenSSH per-connection server daemon (10.0.0.1:41152). Apr 24 00:26:49.452008 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 41152 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:26:49.453800 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:26:49.463452 systemd-logind[1595]: New session 2 of user core. Apr 24 00:26:49.474627 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 00:26:49.510800 sshd[1724]: Connection closed by 10.0.0.1 port 41152 Apr 24 00:26:49.512458 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Apr 24 00:26:49.521776 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:41152.service: Deactivated successfully. Apr 24 00:26:49.524588 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 00:26:49.526449 systemd-logind[1595]: Session 2 logged out. Waiting for processes to exit. Apr 24 00:26:49.529495 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:48260.service - OpenSSH per-connection server daemon (10.0.0.1:48260). Apr 24 00:26:49.543936 systemd-logind[1595]: Removed session 2. Apr 24 00:26:49.603493 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 48260 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:26:49.605411 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:26:49.614023 systemd-logind[1595]: New session 3 of user core. Apr 24 00:26:49.619494 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 24 00:26:49.652683 sshd[1734]: Connection closed by 10.0.0.1 port 48260 Apr 24 00:26:49.653086 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Apr 24 00:26:49.660133 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:48260.service: Deactivated successfully. Apr 24 00:26:49.662746 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 00:26:49.664797 systemd-logind[1595]: Session 3 logged out. Waiting for processes to exit. Apr 24 00:26:49.667120 systemd-logind[1595]: Removed session 3. Apr 24 00:26:50.035756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:26:50.046365 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 00:26:50.055725 systemd[1]: Startup finished in 23.179s (kernel) + 29.947s (initrd) + 8.417s (userspace) = 1min 1.543s. Apr 24 00:26:50.072016 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:26:50.818640 kubelet[1744]: E0424 00:26:50.817863 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:26:50.821680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:26:50.821881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:26:50.822771 systemd[1]: kubelet.service: Consumed 1.177s CPU time, 258.4M memory peak. Apr 24 00:26:59.677752 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:33846.service - OpenSSH per-connection server daemon (10.0.0.1:33846). 
Apr 24 00:26:59.778051 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 33846 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:26:59.780408 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:26:59.789300 systemd-logind[1595]: New session 4 of user core. Apr 24 00:26:59.795655 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 00:26:59.830282 sshd[1760]: Connection closed by 10.0.0.1 port 33846 Apr 24 00:26:59.830737 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Apr 24 00:26:59.839919 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:33846.service: Deactivated successfully. Apr 24 00:26:59.842001 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 00:26:59.844386 systemd-logind[1595]: Session 4 logged out. Waiting for processes to exit. Apr 24 00:26:59.847133 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:33852.service - OpenSSH per-connection server daemon (10.0.0.1:33852). Apr 24 00:26:59.850458 systemd-logind[1595]: Removed session 4. Apr 24 00:26:59.920413 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 33852 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:26:59.921890 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:26:59.930789 systemd-logind[1595]: New session 5 of user core. Apr 24 00:26:59.942840 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 00:26:59.957366 sshd[1769]: Connection closed by 10.0.0.1 port 33852 Apr 24 00:26:59.957554 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Apr 24 00:26:59.970756 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:33852.service: Deactivated successfully. Apr 24 00:26:59.973441 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 00:26:59.975920 systemd-logind[1595]: Session 5 logged out. Waiting for processes to exit. 
Apr 24 00:26:59.978972 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:33864.service - OpenSSH per-connection server daemon (10.0.0.1:33864). Apr 24 00:26:59.981919 systemd-logind[1595]: Removed session 5. Apr 24 00:27:00.060968 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:27:00.062547 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:27:00.072442 systemd-logind[1595]: New session 6 of user core. Apr 24 00:27:00.081966 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 00:27:00.115863 sshd[1778]: Connection closed by 10.0.0.1 port 33864 Apr 24 00:27:00.115983 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Apr 24 00:27:00.133812 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:33864.service: Deactivated successfully. Apr 24 00:27:00.140389 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 00:27:00.142292 systemd-logind[1595]: Session 6 logged out. Waiting for processes to exit. Apr 24 00:27:00.145439 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:33870.service - OpenSSH per-connection server daemon (10.0.0.1:33870). Apr 24 00:27:00.151755 systemd-logind[1595]: Removed session 6. Apr 24 00:27:00.227780 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 33870 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:27:00.228904 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:27:00.239433 systemd-logind[1595]: New session 7 of user core. Apr 24 00:27:00.249756 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 24 00:27:00.290124 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 00:27:00.290807 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:27:00.322467 sudo[1788]: pam_unix(sudo:session): session closed for user root Apr 24 00:27:00.325990 sshd[1787]: Connection closed by 10.0.0.1 port 33870 Apr 24 00:27:00.326061 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Apr 24 00:27:00.340683 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:33870.service: Deactivated successfully. Apr 24 00:27:00.345988 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 00:27:00.348121 systemd-logind[1595]: Session 7 logged out. Waiting for processes to exit. Apr 24 00:27:00.352017 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:33886.service - OpenSSH per-connection server daemon (10.0.0.1:33886). Apr 24 00:27:00.354534 systemd-logind[1595]: Removed session 7. Apr 24 00:27:00.426660 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:27:00.428496 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:27:00.437119 systemd-logind[1595]: New session 8 of user core. Apr 24 00:27:00.447715 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 24 00:27:00.470734 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 00:27:00.471012 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:27:00.480988 sudo[1799]: pam_unix(sudo:session): session closed for user root Apr 24 00:27:00.493918 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 24 00:27:00.494385 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:27:00.510817 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:27:00.595453 augenrules[1821]: No rules Apr 24 00:27:00.597396 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:27:00.597818 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:27:00.599095 sudo[1798]: pam_unix(sudo:session): session closed for user root Apr 24 00:27:00.601855 sshd[1797]: Connection closed by 10.0.0.1 port 33886 Apr 24 00:27:00.602055 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Apr 24 00:27:00.614058 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:33886.service: Deactivated successfully. Apr 24 00:27:00.616434 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 00:27:00.617881 systemd-logind[1595]: Session 8 logged out. Waiting for processes to exit. Apr 24 00:27:00.620699 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902). Apr 24 00:27:00.623414 systemd-logind[1595]: Removed session 8. Apr 24 00:27:00.693256 sshd[1830]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:27:00.694912 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:27:00.706908 systemd-logind[1595]: New session 9 of user core. 
Apr 24 00:27:00.717515 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 00:27:00.737975 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 00:27:00.738421 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:27:01.070990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 00:27:01.076716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:01.329546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:01.336781 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 00:27:01.348752 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:27:01.348758 (dockerd)[1864]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 00:27:01.444524 kubelet[1863]: E0424 00:27:01.442391 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:27:01.446964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:27:01.447347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:27:01.448044 systemd[1]: kubelet.service: Consumed 290ms CPU time, 110.9M memory peak. 
Apr 24 00:27:01.800858 dockerd[1864]: time="2026-04-24T00:27:01.799872194Z" level=info msg="Starting up" Apr 24 00:27:01.801494 dockerd[1864]: time="2026-04-24T00:27:01.801405708Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 24 00:27:01.835768 dockerd[1864]: time="2026-04-24T00:27:01.835101168Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 24 00:27:01.968493 dockerd[1864]: time="2026-04-24T00:27:01.967874283Z" level=info msg="Loading containers: start." Apr 24 00:27:01.992428 kernel: Initializing XFRM netlink socket Apr 24 00:27:06.567849 systemd-networkd[1411]: docker0: Link UP Apr 24 00:27:06.580402 dockerd[1864]: time="2026-04-24T00:27:06.580044796Z" level=info msg="Loading containers: done." Apr 24 00:27:06.654465 dockerd[1864]: time="2026-04-24T00:27:06.653881747Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 00:27:06.654465 dockerd[1864]: time="2026-04-24T00:27:06.654415200Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 24 00:27:06.654465 dockerd[1864]: time="2026-04-24T00:27:06.654487798Z" level=info msg="Initializing buildkit" Apr 24 00:27:06.757289 dockerd[1864]: time="2026-04-24T00:27:06.756387362Z" level=info msg="Completed buildkit initialization" Apr 24 00:27:06.767758 dockerd[1864]: time="2026-04-24T00:27:06.767509053Z" level=info msg="Daemon has completed initialization" Apr 24 00:27:06.767857 dockerd[1864]: time="2026-04-24T00:27:06.767778357Z" level=info msg="API listen on /run/docker.sock" Apr 24 00:27:06.769588 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 24 00:27:07.963826 containerd[1622]: time="2026-04-24T00:27:07.963360532Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 24 00:27:08.640589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946076321.mount: Deactivated successfully. Apr 24 00:27:10.910866 containerd[1622]: time="2026-04-24T00:27:10.910597884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:10.913055 containerd[1622]: time="2026-04-24T00:27:10.913024778Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 24 00:27:10.915434 containerd[1622]: time="2026-04-24T00:27:10.915094074Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:10.919360 containerd[1622]: time="2026-04-24T00:27:10.919336147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:10.919967 containerd[1622]: time="2026-04-24T00:27:10.919838035Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.956254349s" Apr 24 00:27:10.919967 containerd[1622]: time="2026-04-24T00:27:10.919868992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 24 00:27:10.921933 containerd[1622]: 
time="2026-04-24T00:27:10.921562864Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 24 00:27:11.572621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 00:27:11.586404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:12.018619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:12.035500 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:27:12.182874 kubelet[2165]: E0424 00:27:12.182624 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:27:12.186504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:27:12.186851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:27:12.191573 systemd[1]: kubelet.service: Consumed 524ms CPU time, 109.4M memory peak. 
Apr 24 00:27:12.901595 containerd[1622]: time="2026-04-24T00:27:12.900448650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:12.902574 containerd[1622]: time="2026-04-24T00:27:12.901866993Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 24 00:27:12.904988 containerd[1622]: time="2026-04-24T00:27:12.904452161Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:12.908307 containerd[1622]: time="2026-04-24T00:27:12.908281272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:12.909120 containerd[1622]: time="2026-04-24T00:27:12.908998006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.987255987s" Apr 24 00:27:12.909120 containerd[1622]: time="2026-04-24T00:27:12.909115533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 24 00:27:12.911840 containerd[1622]: time="2026-04-24T00:27:12.911790774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 24 00:27:14.413418 containerd[1622]: time="2026-04-24T00:27:14.412967200Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:14.415287 containerd[1622]: time="2026-04-24T00:27:14.415262739Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 24 00:27:14.417048 containerd[1622]: time="2026-04-24T00:27:14.417019587Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:14.421320 containerd[1622]: time="2026-04-24T00:27:14.421295603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:14.422826 containerd[1622]: time="2026-04-24T00:27:14.422799444Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.510989806s" Apr 24 00:27:14.423739 containerd[1622]: time="2026-04-24T00:27:14.422891494Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 24 00:27:14.425571 containerd[1622]: time="2026-04-24T00:27:14.425466692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 24 00:27:15.809083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689368676.mount: Deactivated successfully. 
Apr 24 00:27:16.594972 containerd[1622]: time="2026-04-24T00:27:16.594911553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:16.596922 containerd[1622]: time="2026-04-24T00:27:16.596889791Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 24 00:27:16.600863 containerd[1622]: time="2026-04-24T00:27:16.600819211Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:16.605646 containerd[1622]: time="2026-04-24T00:27:16.605623698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:16.606037 containerd[1622]: time="2026-04-24T00:27:16.606009412Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 2.180427817s" Apr 24 00:27:16.606467 containerd[1622]: time="2026-04-24T00:27:16.606112396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 24 00:27:16.608075 containerd[1622]: time="2026-04-24T00:27:16.607956801Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 24 00:27:17.138040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955677025.mount: Deactivated successfully. 
Apr 24 00:27:19.312647 containerd[1622]: time="2026-04-24T00:27:19.312355677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:19.315041 containerd[1622]: time="2026-04-24T00:27:19.314495990Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 24 00:27:19.317593 containerd[1622]: time="2026-04-24T00:27:19.317383510Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:19.322349 containerd[1622]: time="2026-04-24T00:27:19.320915237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:19.322349 containerd[1622]: time="2026-04-24T00:27:19.322068415Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.713992867s" Apr 24 00:27:19.322349 containerd[1622]: time="2026-04-24T00:27:19.322094531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 24 00:27:19.326097 containerd[1622]: time="2026-04-24T00:27:19.325847432Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 00:27:19.850806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013223183.mount: Deactivated successfully. 
Apr 24 00:27:19.864560 containerd[1622]: time="2026-04-24T00:27:19.864424626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:19.866880 containerd[1622]: time="2026-04-24T00:27:19.866828216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 24 00:27:19.868828 containerd[1622]: time="2026-04-24T00:27:19.868799211Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:19.873038 containerd[1622]: time="2026-04-24T00:27:19.872607101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:19.873868 containerd[1622]: time="2026-04-24T00:27:19.873839935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 547.829825ms" Apr 24 00:27:19.877395 containerd[1622]: time="2026-04-24T00:27:19.877378206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 00:27:19.878049 containerd[1622]: time="2026-04-24T00:27:19.878026286Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 24 00:27:20.507022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750798434.mount: Deactivated successfully. Apr 24 00:27:22.321094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Apr 24 00:27:22.327460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:22.625022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:22.644818 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:27:22.843074 kubelet[2301]: E0424 00:27:22.842552 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:27:22.847571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:27:22.847833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:27:22.848416 systemd[1]: kubelet.service: Consumed 474ms CPU time, 109.3M memory peak. 
Apr 24 00:27:23.732573 containerd[1622]: time="2026-04-24T00:27:23.732509276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:23.735078 containerd[1622]: time="2026-04-24T00:27:23.734898672Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 24 00:27:23.737543 containerd[1622]: time="2026-04-24T00:27:23.737355281Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:23.742788 containerd[1622]: time="2026-04-24T00:27:23.742604401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:27:23.743992 containerd[1622]: time="2026-04-24T00:27:23.743430750Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 3.864033119s" Apr 24 00:27:23.743992 containerd[1622]: time="2026-04-24T00:27:23.743463941Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 24 00:27:27.922007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:27.922414 systemd[1]: kubelet.service: Consumed 474ms CPU time, 109.3M memory peak. Apr 24 00:27:27.926092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:27.984803 systemd[1]: Reload requested from client PID 2353 ('systemctl') (unit session-9.scope)... 
Apr 24 00:27:27.985823 systemd[1]: Reloading... Apr 24 00:27:28.236458 zram_generator::config[2394]: No configuration found. Apr 24 00:27:28.645028 systemd[1]: Reloading finished in 658 ms. Apr 24 00:27:28.782988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 00:27:28.783510 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 00:27:28.784532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:28.784780 systemd[1]: kubelet.service: Consumed 194ms CPU time, 98.1M memory peak. Apr 24 00:27:28.789646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:29.099558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:29.131109 (kubelet)[2445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:27:29.502459 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 00:27:29.504364 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 00:27:29.508429 kubelet[2445]: I0424 00:27:29.504978 2445 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 00:27:30.359034 kubelet[2445]: I0424 00:27:30.358504 2445 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 00:27:30.359034 kubelet[2445]: I0424 00:27:30.358637 2445 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:27:30.359034 kubelet[2445]: I0424 00:27:30.358763 2445 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 00:27:30.359034 kubelet[2445]: I0424 00:27:30.358774 2445 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 00:27:30.359034 kubelet[2445]: I0424 00:27:30.358969 2445 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 00:27:30.425589 kubelet[2445]: E0424 00:27:30.425035 2445 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 00:27:30.442511 kubelet[2445]: I0424 00:27:30.441891 2445 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:27:30.459636 kubelet[2445]: I0424 00:27:30.459555 2445 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:27:30.514135 kubelet[2445]: I0424 00:27:30.513468 2445 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 00:27:30.527778 kubelet[2445]: I0424 00:27:30.527035 2445 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:27:30.527778 kubelet[2445]: I0424 00:27:30.527549 2445 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:27:30.532876 kubelet[2445]: I0424 00:27:30.527790 2445 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 00:27:30.532876 
kubelet[2445]: I0424 00:27:30.527798 2445 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 00:27:30.532876 kubelet[2445]: I0424 00:27:30.528091 2445 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 00:27:30.535279 kubelet[2445]: I0424 00:27:30.535059 2445 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:27:30.537644 kubelet[2445]: I0424 00:27:30.537477 2445 kubelet.go:475] "Attempting to sync node with API server" Apr 24 00:27:30.537644 kubelet[2445]: I0424 00:27:30.537492 2445 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:27:30.537644 kubelet[2445]: I0424 00:27:30.537510 2445 kubelet.go:387] "Adding apiserver pod source" Apr 24 00:27:30.537644 kubelet[2445]: I0424 00:27:30.537519 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:27:30.541459 kubelet[2445]: E0424 00:27:30.539388 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 00:27:30.541459 kubelet[2445]: E0424 00:27:30.539854 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 00:27:30.548130 kubelet[2445]: I0424 00:27:30.547394 2445 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:27:30.551832 kubelet[2445]: I0424 00:27:30.551810 2445 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:27:30.553519 kubelet[2445]: I0424 00:27:30.551904 2445 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 00:27:30.553519 kubelet[2445]: W0424 00:27:30.552104 2445 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 00:27:30.577115 kubelet[2445]: I0424 00:27:30.576953 2445 server.go:1262] "Started kubelet" Apr 24 00:27:30.577115 kubelet[2445]: I0424 00:27:30.577000 2445 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:27:30.577115 kubelet[2445]: I0424 00:27:30.577115 2445 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 00:27:30.579104 kubelet[2445]: I0424 00:27:30.579088 2445 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:27:30.587593 kubelet[2445]: I0424 00:27:30.586916 2445 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:27:30.588627 kubelet[2445]: I0424 00:27:30.588608 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 00:27:30.605801 kubelet[2445]: I0424 00:27:30.605780 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:27:30.615361 kubelet[2445]: I0424 00:27:30.611940 2445 server.go:310] "Adding debug handlers to kubelet server" Apr 24 00:27:30.629758 kubelet[2445]: I0424 00:27:30.629480 2445 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 00:27:30.637873 kubelet[2445]: E0424 00:27:30.636635 2445 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 00:27:30.640130 
kubelet[2445]: I0424 00:27:30.639524 2445 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:27:30.640130 kubelet[2445]: I0424 00:27:30.639799 2445 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:27:30.640950 kubelet[2445]: I0424 00:27:30.640937 2445 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 00:27:30.643578 kubelet[2445]: I0424 00:27:30.643565 2445 reconciler.go:29] "Reconciler: start to sync state" Apr 24 00:27:30.646928 kubelet[2445]: E0424 00:27:30.646908 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 00:27:30.647898 kubelet[2445]: E0424 00:27:30.647878 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Apr 24 00:27:30.649940 kubelet[2445]: E0424 00:27:30.649929 2445 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:27:30.651484 kubelet[2445]: E0424 00:27:30.644891 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a923665fe49ef2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 00:27:30.576826098 +0000 UTC m=+1.429028168,LastTimestamp:2026-04-24 00:27:30.576826098 +0000 UTC m=+1.429028168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 00:27:30.657016 kubelet[2445]: I0424 00:27:30.656877 2445 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:27:30.740087 kubelet[2445]: E0424 00:27:30.739503 2445 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 00:27:30.815554 kubelet[2445]: I0424 00:27:30.814910 2445 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 00:27:30.815554 kubelet[2445]: I0424 00:27:30.815032 2445 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 00:27:30.815554 kubelet[2445]: I0424 00:27:30.815046 2445 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:27:30.815807 kubelet[2445]: I0424 00:27:30.815572 2445 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 24 00:27:30.822912 kubelet[2445]: I0424 00:27:30.821780 2445 policy_none.go:49] "None policy: Start" Apr 24 00:27:30.822912 kubelet[2445]: I0424 00:27:30.821896 2445 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 00:27:30.822912 kubelet[2445]: I0424 00:27:30.821906 2445 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 00:27:30.827857 kubelet[2445]: I0424 00:27:30.826492 2445 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 00:27:30.827857 kubelet[2445]: I0424 00:27:30.827836 2445 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 00:27:30.827857 kubelet[2445]: I0424 00:27:30.827859 2445 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 00:27:30.827934 kubelet[2445]: E0424 00:27:30.827893 2445 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:27:30.830076 kubelet[2445]: I0424 00:27:30.829488 2445 policy_none.go:47] "Start" Apr 24 00:27:30.830931 kubelet[2445]: E0424 00:27:30.830807 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 00:27:30.840350 kubelet[2445]: E0424 00:27:30.840136 2445 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 00:27:30.848529 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 24 00:27:30.856386 kubelet[2445]: E0424 00:27:30.856361 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Apr 24 00:27:30.893471 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 00:27:30.906063 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 24 00:27:30.921094 kubelet[2445]: E0424 00:27:30.921071 2445 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:27:30.921552 kubelet[2445]: I0424 00:27:30.921541 2445 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 00:27:30.921624 kubelet[2445]: I0424 00:27:30.921602 2445 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:27:30.925111 kubelet[2445]: I0424 00:27:30.924028 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 00:27:30.927340 kubelet[2445]: E0424 00:27:30.926860 2445 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 24 00:27:30.927340 kubelet[2445]: E0424 00:27:30.927022 2445 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 24 00:27:30.948052 kubelet[2445]: I0424 00:27:30.947888 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b472b749f6f1369a7b4004525a5ef454-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b472b749f6f1369a7b4004525a5ef454\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:30.948052 kubelet[2445]: I0424 00:27:30.948018 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b472b749f6f1369a7b4004525a5ef454-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b472b749f6f1369a7b4004525a5ef454\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:30.948052 kubelet[2445]: I0424 00:27:30.948035 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b472b749f6f1369a7b4004525a5ef454-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b472b749f6f1369a7b4004525a5ef454\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:30.948554 kubelet[2445]: I0424 00:27:30.948435 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:30.948554 kubelet[2445]: I0424 00:27:30.948547 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:30.948594 kubelet[2445]: I0424 00:27:30.948560 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:30.948594 kubelet[2445]: I0424 00:27:30.948570 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:30.948594 kubelet[2445]: I0424 00:27:30.948581 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:30.970085 systemd[1]: Created slice kubepods-burstable-podb472b749f6f1369a7b4004525a5ef454.slice - libcontainer container kubepods-burstable-podb472b749f6f1369a7b4004525a5ef454.slice. 
Apr 24 00:27:30.990981 kubelet[2445]: E0424 00:27:30.989531 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:31.002835 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 24 00:27:31.007948 kubelet[2445]: E0424 00:27:31.007840 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:31.023115 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. Apr 24 00:27:31.027474 kubelet[2445]: E0424 00:27:31.027361 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:31.031997 kubelet[2445]: I0424 00:27:31.031866 2445 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 00:27:31.034843 kubelet[2445]: E0424 00:27:31.034552 2445 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 00:27:31.051103 kubelet[2445]: I0424 00:27:31.050481 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:31.192862 kubelet[2445]: E0424 00:27:31.191497 2445 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a923665fe49ef2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 00:27:30.576826098 +0000 UTC m=+1.429028168,LastTimestamp:2026-04-24 00:27:30.576826098 +0000 UTC m=+1.429028168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 00:27:31.243341 kubelet[2445]: I0424 00:27:31.242095 2445 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 00:27:31.243341 kubelet[2445]: E0424 00:27:31.242918 2445 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 00:27:31.260136 kubelet[2445]: E0424 00:27:31.259129 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Apr 24 00:27:31.306000 kubelet[2445]: E0424 00:27:31.305553 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:31.309082 containerd[1622]: time="2026-04-24T00:27:31.309049294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b472b749f6f1369a7b4004525a5ef454,Namespace:kube-system,Attempt:0,}" Apr 24 00:27:31.315774 kubelet[2445]: E0424 00:27:31.315564 2445 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:31.316568 containerd[1622]: time="2026-04-24T00:27:31.316521044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 24 00:27:31.337801 kubelet[2445]: E0424 00:27:31.337016 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:31.340459 containerd[1622]: time="2026-04-24T00:27:31.339902935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 24 00:27:31.651384 kubelet[2445]: I0424 00:27:31.650924 2445 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 00:27:31.657875 kubelet[2445]: E0424 00:27:31.657551 2445 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 00:27:31.782028 kubelet[2445]: E0424 00:27:31.781878 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 00:27:31.895520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793949293.mount: Deactivated successfully. 
Apr 24 00:27:31.914870 containerd[1622]: time="2026-04-24T00:27:31.913989032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:27:31.922483 containerd[1622]: time="2026-04-24T00:27:31.921992205Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:27:31.927756 containerd[1622]: time="2026-04-24T00:27:31.927115988Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 24 00:27:31.930347 containerd[1622]: time="2026-04-24T00:27:31.929817813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 00:27:31.935425 containerd[1622]: time="2026-04-24T00:27:31.934121300Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:27:31.937542 containerd[1622]: time="2026-04-24T00:27:31.937414070Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:27:31.940920 containerd[1622]: time="2026-04-24T00:27:31.940881610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 00:27:31.944597 containerd[1622]: time="2026-04-24T00:27:31.944452496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 
00:27:31.946443 containerd[1622]: time="2026-04-24T00:27:31.946099144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 600.322559ms" Apr 24 00:27:31.954996 containerd[1622]: time="2026-04-24T00:27:31.954776817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 632.937933ms" Apr 24 00:27:31.957412 containerd[1622]: time="2026-04-24T00:27:31.957387161Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 639.02683ms" Apr 24 00:27:32.053749 containerd[1622]: time="2026-04-24T00:27:32.051064207Z" level=info msg="connecting to shim 12c5814f1d53627e4c6cb2868e1ee1a79749b613dcb3fa963107ef929e1d6f75" address="unix:///run/containerd/s/29c559a17ebcbb96463726a2d1942cf378b117fbd898ab557022fafdc5cda5f9" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:27:32.053901 kubelet[2445]: E0424 00:27:32.052641 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 00:27:32.061017 kubelet[2445]: E0424 
00:27:32.060640 2445 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s" Apr 24 00:27:32.104982 kubelet[2445]: E0424 00:27:32.104393 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 00:27:32.181842 containerd[1622]: time="2026-04-24T00:27:32.181096105Z" level=info msg="connecting to shim 88211a76180e5497b0e89f222a1cd0ee934ac779d848f0d86e9079cc0e8a739d" address="unix:///run/containerd/s/a34712804ca059f7dba4e748e2638e83da94871d0d3dacb559964039c2e3a306" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:27:32.231633 kubelet[2445]: E0424 00:27:32.231453 2445 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 00:27:32.246640 containerd[1622]: time="2026-04-24T00:27:32.246440124Z" level=info msg="connecting to shim 33d74f3f65685b5b00f2d95b83dab4b34be79ba7141463bd78573c9628073956" address="unix:///run/containerd/s/68813ad70036e51fb81deb0bd77fa6341de97fddeeff38f5c1b3b81267358d0d" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:27:32.311425 systemd[1]: Started cri-containerd-12c5814f1d53627e4c6cb2868e1ee1a79749b613dcb3fa963107ef929e1d6f75.scope - libcontainer container 12c5814f1d53627e4c6cb2868e1ee1a79749b613dcb3fa963107ef929e1d6f75. 
Apr 24 00:27:32.384481 systemd[1]: Started cri-containerd-88211a76180e5497b0e89f222a1cd0ee934ac779d848f0d86e9079cc0e8a739d.scope - libcontainer container 88211a76180e5497b0e89f222a1cd0ee934ac779d848f0d86e9079cc0e8a739d. Apr 24 00:27:32.465638 kubelet[2445]: I0424 00:27:32.463851 2445 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 00:27:32.465638 kubelet[2445]: E0424 00:27:32.464450 2445 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Apr 24 00:27:32.514875 update_engine[1598]: I20260424 00:27:32.513545 1598 update_attempter.cc:509] Updating boot flags... Apr 24 00:27:32.544417 kubelet[2445]: E0424 00:27:32.544087 2445 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 00:27:32.545526 systemd[1]: Started cri-containerd-33d74f3f65685b5b00f2d95b83dab4b34be79ba7141463bd78573c9628073956.scope - libcontainer container 33d74f3f65685b5b00f2d95b83dab4b34be79ba7141463bd78573c9628073956. 
Apr 24 00:27:32.594008 containerd[1622]: time="2026-04-24T00:27:32.593536223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"88211a76180e5497b0e89f222a1cd0ee934ac779d848f0d86e9079cc0e8a739d\"" Apr 24 00:27:32.600469 kubelet[2445]: E0424 00:27:32.600401 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:32.619389 containerd[1622]: time="2026-04-24T00:27:32.619120988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"12c5814f1d53627e4c6cb2868e1ee1a79749b613dcb3fa963107ef929e1d6f75\"" Apr 24 00:27:32.638506 kubelet[2445]: E0424 00:27:32.638088 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:32.659845 containerd[1622]: time="2026-04-24T00:27:32.659503579Z" level=info msg="CreateContainer within sandbox \"88211a76180e5497b0e89f222a1cd0ee934ac779d848f0d86e9079cc0e8a739d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 00:27:32.689056 containerd[1622]: time="2026-04-24T00:27:32.689017727Z" level=info msg="CreateContainer within sandbox \"12c5814f1d53627e4c6cb2868e1ee1a79749b613dcb3fa963107ef929e1d6f75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 00:27:32.754801 containerd[1622]: time="2026-04-24T00:27:32.750569430Z" level=info msg="Container b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:27:32.767104 containerd[1622]: time="2026-04-24T00:27:32.766374837Z" level=info msg="Container 
19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:27:32.847954 containerd[1622]: time="2026-04-24T00:27:32.847808039Z" level=info msg="CreateContainer within sandbox \"12c5814f1d53627e4c6cb2868e1ee1a79749b613dcb3fa963107ef929e1d6f75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9\"" Apr 24 00:27:32.851639 containerd[1622]: time="2026-04-24T00:27:32.851406842Z" level=info msg="StartContainer for \"19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9\"" Apr 24 00:27:32.851639 containerd[1622]: time="2026-04-24T00:27:32.851546659Z" level=info msg="CreateContainer within sandbox \"88211a76180e5497b0e89f222a1cd0ee934ac779d848f0d86e9079cc0e8a739d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b\"" Apr 24 00:27:32.856944 containerd[1622]: time="2026-04-24T00:27:32.854519719Z" level=info msg="StartContainer for \"b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b\"" Apr 24 00:27:32.857843 containerd[1622]: time="2026-04-24T00:27:32.857820918Z" level=info msg="connecting to shim b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b" address="unix:///run/containerd/s/a34712804ca059f7dba4e748e2638e83da94871d0d3dacb559964039c2e3a306" protocol=ttrpc version=3 Apr 24 00:27:32.860025 containerd[1622]: time="2026-04-24T00:27:32.859863954Z" level=info msg="connecting to shim 19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9" address="unix:///run/containerd/s/29c559a17ebcbb96463726a2d1942cf378b117fbd898ab557022fafdc5cda5f9" protocol=ttrpc version=3 Apr 24 00:27:32.886367 containerd[1622]: time="2026-04-24T00:27:32.885876329Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b472b749f6f1369a7b4004525a5ef454,Namespace:kube-system,Attempt:0,} returns sandbox id \"33d74f3f65685b5b00f2d95b83dab4b34be79ba7141463bd78573c9628073956\"" Apr 24 00:27:32.890103 kubelet[2445]: E0424 00:27:32.890086 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:32.912033 containerd[1622]: time="2026-04-24T00:27:32.911537898Z" level=info msg="CreateContainer within sandbox \"33d74f3f65685b5b00f2d95b83dab4b34be79ba7141463bd78573c9628073956\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 00:27:32.943110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864237440.mount: Deactivated successfully. Apr 24 00:27:32.960116 containerd[1622]: time="2026-04-24T00:27:32.959487088Z" level=info msg="Container 639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:27:32.966991 systemd[1]: Started cri-containerd-b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b.scope - libcontainer container b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b. Apr 24 00:27:32.989844 systemd[1]: Started cri-containerd-19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9.scope - libcontainer container 19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9. 
Apr 24 00:27:33.008013 containerd[1622]: time="2026-04-24T00:27:33.007428918Z" level=info msg="CreateContainer within sandbox \"33d74f3f65685b5b00f2d95b83dab4b34be79ba7141463bd78573c9628073956\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a\"" Apr 24 00:27:33.017752 containerd[1622]: time="2026-04-24T00:27:33.017561367Z" level=info msg="StartContainer for \"639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a\"" Apr 24 00:27:33.019605 containerd[1622]: time="2026-04-24T00:27:33.018871893Z" level=info msg="connecting to shim 639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a" address="unix:///run/containerd/s/68813ad70036e51fb81deb0bd77fa6341de97fddeeff38f5c1b3b81267358d0d" protocol=ttrpc version=3 Apr 24 00:27:33.117524 systemd[1]: Started cri-containerd-639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a.scope - libcontainer container 639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a. 
Apr 24 00:27:33.184404 containerd[1622]: time="2026-04-24T00:27:33.184105065Z" level=info msg="StartContainer for \"b13132a7dd027b54a75c4a5f1a3a7f93c25efb5eaa827807448ffbb31c92885b\" returns successfully" Apr 24 00:27:33.225597 containerd[1622]: time="2026-04-24T00:27:33.225038393Z" level=info msg="StartContainer for \"19b83d42c9cfe53f0c4d2a0574a01edbcf47989d49fed213654fdb478bdd9ee9\" returns successfully" Apr 24 00:27:33.326335 containerd[1622]: time="2026-04-24T00:27:33.325623593Z" level=info msg="StartContainer for \"639e51f35862b8517143ed42da869fff590e3d0c911773a154ec8cc641832b2a\" returns successfully" Apr 24 00:27:33.936115 kubelet[2445]: E0424 00:27:33.933126 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:33.936115 kubelet[2445]: E0424 00:27:33.933607 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:33.955135 kubelet[2445]: E0424 00:27:33.952441 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:33.955135 kubelet[2445]: E0424 00:27:33.952542 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:33.960912 kubelet[2445]: E0424 00:27:33.960894 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:33.961081 kubelet[2445]: E0424 00:27:33.961072 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:34.072033 
kubelet[2445]: I0424 00:27:34.071481 2445 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 00:27:34.963996 kubelet[2445]: E0424 00:27:34.962645 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:34.963996 kubelet[2445]: E0424 00:27:34.963567 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:34.963996 kubelet[2445]: E0424 00:27:34.963636 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:34.965508 kubelet[2445]: E0424 00:27:34.964475 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:34.967528 kubelet[2445]: E0424 00:27:34.965913 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:34.967528 kubelet[2445]: E0424 00:27:34.966446 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:35.971829 kubelet[2445]: E0424 00:27:35.971100 2445 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:35.971829 kubelet[2445]: E0424 00:27:35.971504 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:35.975950 kubelet[2445]: E0424 00:27:35.975538 2445 
kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 00:27:35.976117 kubelet[2445]: E0424 00:27:35.976108 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:36.342501 kubelet[2445]: E0424 00:27:36.341967 2445 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 24 00:27:36.545449 kubelet[2445]: I0424 00:27:36.544893 2445 apiserver.go:52] "Watching apiserver" Apr 24 00:27:36.645106 kubelet[2445]: I0424 00:27:36.642952 2445 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:27:36.679389 kubelet[2445]: I0424 00:27:36.679352 2445 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 00:27:36.741841 kubelet[2445]: I0424 00:27:36.739445 2445 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:36.916858 kubelet[2445]: E0424 00:27:36.914566 2445 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:36.916858 kubelet[2445]: I0424 00:27:36.914824 2445 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:36.939824 kubelet[2445]: E0424 00:27:36.939456 2445 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:36.943795 kubelet[2445]: I0424 00:27:36.943776 2445 kubelet.go:3220] "Creating a mirror pod for static 
pod" pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:36.957946 kubelet[2445]: E0424 00:27:36.957628 2445 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:36.979484 kubelet[2445]: I0424 00:27:36.978925 2445 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:36.986342 kubelet[2445]: E0424 00:27:36.983089 2445 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:36.986342 kubelet[2445]: E0424 00:27:36.983612 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:38.643916 kubelet[2445]: I0424 00:27:38.643572 2445 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:38.656832 kubelet[2445]: E0424 00:27:38.656080 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:38.985934 kubelet[2445]: E0424 00:27:38.985062 2445 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:39.145955 systemd[1]: Reload requested from client PID 2755 ('systemctl') (unit session-9.scope)... Apr 24 00:27:39.146079 systemd[1]: Reloading... Apr 24 00:27:39.323574 zram_generator::config[2798]: No configuration found. Apr 24 00:27:39.718813 systemd[1]: Reloading finished in 572 ms. 
Apr 24 00:27:39.789930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:39.807608 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 00:27:39.809879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:39.809997 systemd[1]: kubelet.service: Consumed 3.839s CPU time, 127.3M memory peak. Apr 24 00:27:39.816882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:27:40.121046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:27:40.147043 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:27:40.383591 sudo[2855]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 24 00:27:40.384433 sudo[2855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 24 00:27:40.394922 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 00:27:40.394922 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 00:27:40.396058 kubelet[2843]: I0424 00:27:40.395866 2843 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 00:27:40.435539 kubelet[2843]: I0424 00:27:40.435102 2843 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 00:27:40.435539 kubelet[2843]: I0424 00:27:40.435534 2843 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:27:40.435793 kubelet[2843]: I0424 00:27:40.435557 2843 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 00:27:40.435793 kubelet[2843]: I0424 00:27:40.435566 2843 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 00:27:40.436553 kubelet[2843]: I0424 00:27:40.436344 2843 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 00:27:40.439531 kubelet[2843]: I0424 00:27:40.439034 2843 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 00:27:40.448111 kubelet[2843]: I0424 00:27:40.447841 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:27:40.479580 kubelet[2843]: I0424 00:27:40.479053 2843 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:27:40.509855 kubelet[2843]: I0424 00:27:40.509451 2843 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 00:27:40.509855 kubelet[2843]: I0424 00:27:40.510006 2843 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:27:40.509855 kubelet[2843]: I0424 00:27:40.510031 2843 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:27:40.509855 kubelet[2843]: I0424 00:27:40.510423 2843 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 00:27:40.511944 
kubelet[2843]: I0424 00:27:40.510431 2843 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 00:27:40.511944 kubelet[2843]: I0424 00:27:40.510460 2843 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 00:27:40.511944 kubelet[2843]: I0424 00:27:40.511073 2843 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:27:40.511944 kubelet[2843]: I0424 00:27:40.511489 2843 kubelet.go:475] "Attempting to sync node with API server" Apr 24 00:27:40.511944 kubelet[2843]: I0424 00:27:40.511502 2843 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:27:40.511944 kubelet[2843]: I0424 00:27:40.511525 2843 kubelet.go:387] "Adding apiserver pod source" Apr 24 00:27:40.511944 kubelet[2843]: I0424 00:27:40.511543 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:27:40.521412 kubelet[2843]: I0424 00:27:40.519430 2843 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:27:40.521412 kubelet[2843]: I0424 00:27:40.520573 2843 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:27:40.521412 kubelet[2843]: I0424 00:27:40.520595 2843 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 00:27:40.562127 kubelet[2843]: I0424 00:27:40.562108 2843 server.go:1262] "Started kubelet" Apr 24 00:27:40.566389 kubelet[2843]: I0424 00:27:40.565839 2843 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:27:40.567555 kubelet[2843]: I0424 00:27:40.567539 2843 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 00:27:40.573022 kubelet[2843]: I0424 00:27:40.573007 2843 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:27:40.573122 kubelet[2843]: I0424 00:27:40.567624 2843 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:27:40.577436 kubelet[2843]: I0424 00:27:40.576577 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 00:27:40.579929 kubelet[2843]: I0424 00:27:40.579910 2843 server.go:310] "Adding debug handlers to kubelet server" Apr 24 00:27:40.584042 kubelet[2843]: I0424 00:27:40.581635 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:27:40.592812 kubelet[2843]: I0424 00:27:40.592602 2843 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:27:40.594014 kubelet[2843]: I0424 00:27:40.593379 2843 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 00:27:40.607478 kubelet[2843]: E0424 00:27:40.607456 2843 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:27:40.607959 kubelet[2843]: I0424 00:27:40.607947 2843 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:27:40.608024 kubelet[2843]: I0424 00:27:40.608020 2843 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:27:40.609649 kubelet[2843]: I0424 00:27:40.609122 2843 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 00:27:40.614471 kubelet[2843]: I0424 00:27:40.612902 2843 reconciler.go:29] "Reconciler: start to sync state" Apr 24 00:27:40.786816 kubelet[2843]: I0424 00:27:40.786494 2843 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 00:27:40.790397 kubelet[2843]: I0424 00:27:40.789119 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 00:27:40.790592 kubelet[2843]: I0424 00:27:40.790582 2843 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:27:40.794119 kubelet[2843]: I0424 00:27:40.794033 2843 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 00:27:40.795073 kubelet[2843]: I0424 00:27:40.795039 2843 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 00:27:40.795136 kubelet[2843]: I0424 00:27:40.795129 2843 policy_none.go:49] "None policy: Start" Apr 24 00:27:40.798431 kubelet[2843]: I0424 00:27:40.797597 2843 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 00:27:40.798431 kubelet[2843]: I0424 00:27:40.797655 2843 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 00:27:40.798431 kubelet[2843]: I0424 00:27:40.798007 2843 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 00:27:40.798431 kubelet[2843]: I0424 00:27:40.798013 2843 policy_none.go:47] "Start" Apr 24 00:27:40.813507 kubelet[2843]: E0424 00:27:40.813490 2843 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:27:40.816009 kubelet[2843]: I0424 00:27:40.815994 2843 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 00:27:40.820014 kubelet[2843]: I0424 00:27:40.818031 2843 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:27:40.820014 kubelet[2843]: I0424 00:27:40.818570 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 00:27:40.831500 kubelet[2843]: E0424 00:27:40.831484 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 00:27:40.837882 kubelet[2843]: I0424 00:27:40.828658 2843 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 00:27:40.872984 kubelet[2843]: I0424 00:27:40.872964 2843 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 00:27:40.873097 kubelet[2843]: I0424 00:27:40.873092 2843 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 00:27:40.873438 kubelet[2843]: I0424 00:27:40.873428 2843 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 00:27:40.873541 kubelet[2843]: E0424 00:27:40.873532 2843 kubelet.go:2452] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Apr 24 00:27:40.972990 kubelet[2843]: I0424 00:27:40.972902 2843 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 00:27:40.978995 kubelet[2843]: I0424 00:27:40.978975 2843 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:40.980087 kubelet[2843]: I0424 00:27:40.979473 2843 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:40.980518 kubelet[2843]: I0424 00:27:40.979553 2843 kubelet.go:3220] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:41.111020 kubelet[2843]: E0424 00:27:41.110827 2843 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:41.111461 sudo[2855]: pam_unix(sudo:session): session closed for user root Apr 24 00:27:41.122869 kubelet[2843]: I0424 00:27:41.119654 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b472b749f6f1369a7b4004525a5ef454-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b472b749f6f1369a7b4004525a5ef454\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:41.122869 kubelet[2843]: I0424 00:27:41.119810 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b472b749f6f1369a7b4004525a5ef454-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b472b749f6f1369a7b4004525a5ef454\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:41.122869 kubelet[2843]: I0424 00:27:41.119824 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:41.122869 kubelet[2843]: I0424 00:27:41.119838 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:41.122869 
kubelet[2843]: I0424 00:27:41.119854 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:41.128089 kubelet[2843]: I0424 00:27:41.119938 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b472b749f6f1369a7b4004525a5ef454-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b472b749f6f1369a7b4004525a5ef454\") " pod="kube-system/kube-apiserver-localhost" Apr 24 00:27:41.128089 kubelet[2843]: I0424 00:27:41.127009 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:41.128089 kubelet[2843]: I0424 00:27:41.127031 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 00:27:41.128089 kubelet[2843]: I0424 00:27:41.127047 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 24 00:27:41.149352 
kubelet[2843]: I0424 00:27:41.148489 2843 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 24 00:27:41.149352 kubelet[2843]: I0424 00:27:41.149003 2843 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 00:27:41.353343 kubelet[2843]: E0424 00:27:41.350544 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:41.420018 kubelet[2843]: E0424 00:27:41.415477 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:41.436985 kubelet[2843]: E0424 00:27:41.436548 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:41.516502 kubelet[2843]: I0424 00:27:41.516470 2843 apiserver.go:52] "Watching apiserver" Apr 24 00:27:41.617607 kubelet[2843]: I0424 00:27:41.616373 2843 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:27:41.821457 kubelet[2843]: I0424 00:27:41.814897 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.814879744 podStartE2EDuration="1.814879744s" podCreationTimestamp="2026-04-24 00:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:27:41.796998275 +0000 UTC m=+1.638933316" watchObservedRunningTime="2026-04-24 00:27:41.814879744 +0000 UTC m=+1.656814773" Apr 24 00:27:41.868054 kubelet[2843]: I0424 00:27:41.867650 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=3.8676369680000002 podStartE2EDuration="3.867636968s" podCreationTimestamp="2026-04-24 00:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:27:41.816061728 +0000 UTC m=+1.657996768" watchObservedRunningTime="2026-04-24 00:27:41.867636968 +0000 UTC m=+1.709572016" Apr 24 00:27:41.868054 kubelet[2843]: I0424 00:27:41.867850 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.867845986 podStartE2EDuration="867.845986ms" podCreationTimestamp="2026-04-24 00:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:27:41.858502133 +0000 UTC m=+1.700437173" watchObservedRunningTime="2026-04-24 00:27:41.867845986 +0000 UTC m=+1.709781026" Apr 24 00:27:41.934950 kubelet[2843]: E0424 00:27:41.934919 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:41.936614 kubelet[2843]: E0424 00:27:41.935850 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:41.936896 kubelet[2843]: E0424 00:27:41.935942 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:42.939662 kubelet[2843]: E0424 00:27:42.939051 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:42.946543 kubelet[2843]: E0424 00:27:42.946514 2843 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:43.526554 kubelet[2843]: E0424 00:27:43.525895 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:43.948531 kubelet[2843]: E0424 00:27:43.946928 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:43.948531 kubelet[2843]: E0424 00:27:43.948124 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:43.977469 kubelet[2843]: E0424 00:27:43.949828 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:44.151388 sudo[1834]: pam_unix(sudo:session): session closed for user root Apr 24 00:27:44.157928 sshd[1833]: Connection closed by 10.0.0.1 port 33902 Apr 24 00:27:44.157570 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Apr 24 00:27:44.184976 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:33902.service: Deactivated successfully. Apr 24 00:27:44.197641 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 00:27:44.198535 systemd[1]: session-9.scope: Consumed 8.938s CPU time, 274.3M memory peak. Apr 24 00:27:44.202126 systemd-logind[1595]: Session 9 logged out. Waiting for processes to exit. Apr 24 00:27:44.207569 systemd-logind[1595]: Removed session 9. 
Apr 24 00:27:44.718612 kubelet[2843]: I0424 00:27:44.717432 2843 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 00:27:44.724090 containerd[1622]: time="2026-04-24T00:27:44.723104875Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 00:27:44.725052 kubelet[2843]: I0424 00:27:44.723933 2843 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 00:27:44.953858 kubelet[2843]: E0424 00:27:44.953561 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:45.219549 systemd[1]: Created slice kubepods-besteffort-pod8e72ad1e_d217_4ba3_b7f8_77fe514e8ba7.slice - libcontainer container kubepods-besteffort-pod8e72ad1e_d217_4ba3_b7f8_77fe514e8ba7.slice. Apr 24 00:27:45.262934 systemd[1]: Created slice kubepods-burstable-pod542ac70f_2c8b_455d_82e5_49c0c48732bd.slice - libcontainer container kubepods-burstable-pod542ac70f_2c8b_455d_82e5_49c0c48732bd.slice. 
Apr 24 00:27:45.302868 kubelet[2843]: I0424 00:27:45.297892 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-hostproc\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.302868 kubelet[2843]: I0424 00:27:45.298040 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-cgroup\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.302868 kubelet[2843]: I0424 00:27:45.298053 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cni-path\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.302868 kubelet[2843]: I0424 00:27:45.298066 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-lib-modules\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.302868 kubelet[2843]: I0424 00:27:45.298078 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7-xtables-lock\") pod \"kube-proxy-s8jk7\" (UID: \"8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7\") " pod="kube-system/kube-proxy-s8jk7" Apr 24 00:27:45.302868 kubelet[2843]: I0424 00:27:45.298091 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-xtables-lock\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.311902 kubelet[2843]: I0424 00:27:45.298107 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-net\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.311902 kubelet[2843]: I0424 00:27:45.298117 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-hubble-tls\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.311902 kubelet[2843]: I0424 00:27:45.298785 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7-lib-modules\") pod \"kube-proxy-s8jk7\" (UID: \"8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7\") " pod="kube-system/kube-proxy-s8jk7" Apr 24 00:27:45.311902 kubelet[2843]: I0424 00:27:45.298801 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-run\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.311902 kubelet[2843]: I0424 00:27:45.298811 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-bpf-maps\") pod \"cilium-qgxzr\" (UID: 
\"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.311902 kubelet[2843]: I0424 00:27:45.298821 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-etc-cni-netd\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.312006 kubelet[2843]: I0424 00:27:45.298830 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/542ac70f-2c8b-455d-82e5-49c0c48732bd-clustermesh-secrets\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.312006 kubelet[2843]: I0424 00:27:45.298840 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7-kube-proxy\") pod \"kube-proxy-s8jk7\" (UID: \"8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7\") " pod="kube-system/kube-proxy-s8jk7" Apr 24 00:27:45.312006 kubelet[2843]: I0424 00:27:45.298851 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cpc5\" (UniqueName: \"kubernetes.io/projected/8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7-kube-api-access-4cpc5\") pod \"kube-proxy-s8jk7\" (UID: \"8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7\") " pod="kube-system/kube-proxy-s8jk7" Apr 24 00:27:45.312006 kubelet[2843]: I0424 00:27:45.298865 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-config-path\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 
00:27:45.312006 kubelet[2843]: I0424 00:27:45.298877 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-kernel\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.312087 kubelet[2843]: I0424 00:27:45.298889 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfjkw\" (UniqueName: \"kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-kube-api-access-cfjkw\") pod \"cilium-qgxzr\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " pod="kube-system/cilium-qgxzr" Apr 24 00:27:45.583871 kubelet[2843]: E0424 00:27:45.583065 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:45.586979 containerd[1622]: time="2026-04-24T00:27:45.586945662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8jk7,Uid:8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7,Namespace:kube-system,Attempt:0,}" Apr 24 00:27:45.602896 kubelet[2843]: E0424 00:27:45.602375 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:45.606803 containerd[1622]: time="2026-04-24T00:27:45.606394444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgxzr,Uid:542ac70f-2c8b-455d-82e5-49c0c48732bd,Namespace:kube-system,Attempt:0,}" Apr 24 00:27:45.774056 containerd[1622]: time="2026-04-24T00:27:45.774003842Z" level=info msg="connecting to shim 8997d877dc2411ff3eaa3affcb3d1dc2700b65cef165f04a3da62a46b721eb40" address="unix:///run/containerd/s/78070837d2b661c9c3466d29e6228f8a0b122bea0656c5790a5b212e1e694880" namespace=k8s.io 
protocol=ttrpc version=3 Apr 24 00:27:45.795036 systemd[1]: Created slice kubepods-besteffort-pod011a458d_8cdd_4c13_9c44_28c738bfa972.slice - libcontainer container kubepods-besteffort-pod011a458d_8cdd_4c13_9c44_28c738bfa972.slice. Apr 24 00:27:45.809042 kubelet[2843]: I0424 00:27:45.807994 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/011a458d-8cdd-4c13-9c44-28c738bfa972-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-mld5l\" (UID: \"011a458d-8cdd-4c13-9c44-28c738bfa972\") " pod="kube-system/cilium-operator-6f9c7c5859-mld5l" Apr 24 00:27:45.809042 kubelet[2843]: I0424 00:27:45.808819 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhsfv\" (UniqueName: \"kubernetes.io/projected/011a458d-8cdd-4c13-9c44-28c738bfa972-kube-api-access-jhsfv\") pod \"cilium-operator-6f9c7c5859-mld5l\" (UID: \"011a458d-8cdd-4c13-9c44-28c738bfa972\") " pod="kube-system/cilium-operator-6f9c7c5859-mld5l" Apr 24 00:27:45.830386 containerd[1622]: time="2026-04-24T00:27:45.829944156Z" level=info msg="connecting to shim 4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b" address="unix:///run/containerd/s/d1c4e33226aca9055ba6f678dbde91dd5125c73e968bb56e84aecfcb7184e12d" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:27:45.977465 kubelet[2843]: E0424 00:27:45.974821 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:45.988115 systemd[1]: Started cri-containerd-4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b.scope - libcontainer container 4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b. 
Apr 24 00:27:46.011521 systemd[1]: Started cri-containerd-8997d877dc2411ff3eaa3affcb3d1dc2700b65cef165f04a3da62a46b721eb40.scope - libcontainer container 8997d877dc2411ff3eaa3affcb3d1dc2700b65cef165f04a3da62a46b721eb40. Apr 24 00:27:46.124105 containerd[1622]: time="2026-04-24T00:27:46.123080211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgxzr,Uid:542ac70f-2c8b-455d-82e5-49c0c48732bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\"" Apr 24 00:27:46.127656 kubelet[2843]: E0424 00:27:46.127131 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:46.133875 containerd[1622]: time="2026-04-24T00:27:46.132647843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-mld5l,Uid:011a458d-8cdd-4c13-9c44-28c738bfa972,Namespace:kube-system,Attempt:0,}" Apr 24 00:27:46.143124 kubelet[2843]: E0424 00:27:46.140638 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:46.152822 containerd[1622]: time="2026-04-24T00:27:46.152111106Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 24 00:27:46.188529 containerd[1622]: time="2026-04-24T00:27:46.188383099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8jk7,Uid:8e72ad1e-d217-4ba3-b7f8-77fe514e8ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8997d877dc2411ff3eaa3affcb3d1dc2700b65cef165f04a3da62a46b721eb40\"" Apr 24 00:27:46.192844 kubelet[2843]: E0424 00:27:46.190087 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:27:46.224083 containerd[1622]: time="2026-04-24T00:27:46.223055837Z" level=info msg="CreateContainer within sandbox \"8997d877dc2411ff3eaa3affcb3d1dc2700b65cef165f04a3da62a46b721eb40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 00:27:46.281437 containerd[1622]: time="2026-04-24T00:27:46.274110963Z" level=info msg="connecting to shim 8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1" address="unix:///run/containerd/s/c0ef5eae7393ad469704285592f6be5aa398958d946f854188a2fa1eb25e3c3d" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:27:46.331916 containerd[1622]: time="2026-04-24T00:27:46.330075638Z" level=info msg="Container 74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:27:46.354869 containerd[1622]: time="2026-04-24T00:27:46.354827222Z" level=info msg="CreateContainer within sandbox \"8997d877dc2411ff3eaa3affcb3d1dc2700b65cef165f04a3da62a46b721eb40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1\"" Apr 24 00:27:46.361871 containerd[1622]: time="2026-04-24T00:27:46.360452821Z" level=info msg="StartContainer for \"74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1\"" Apr 24 00:27:46.388116 containerd[1622]: time="2026-04-24T00:27:46.388047151Z" level=info msg="connecting to shim 74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1" address="unix:///run/containerd/s/78070837d2b661c9c3466d29e6228f8a0b122bea0656c5790a5b212e1e694880" protocol=ttrpc version=3 Apr 24 00:27:46.410869 systemd[1]: Started cri-containerd-8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1.scope - libcontainer container 8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1. 
Apr 24 00:27:46.534832 systemd[1]: Started cri-containerd-74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1.scope - libcontainer container 74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1.
Apr 24 00:27:46.668994 containerd[1622]: time="2026-04-24T00:27:46.668084009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-mld5l,Uid:011a458d-8cdd-4c13-9c44-28c738bfa972,Namespace:kube-system,Attempt:0,} returns sandbox id \"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\""
Apr 24 00:27:46.678043 kubelet[2843]: E0424 00:27:46.677874 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:27:46.817956 containerd[1622]: time="2026-04-24T00:27:46.816574562Z" level=info msg="StartContainer for \"74834b4f6bc48f6192fe08c13b82997eea768bd6b197d0e4d12b9014ef2175c1\" returns successfully"
Apr 24 00:27:47.022836 kubelet[2843]: E0424 00:27:47.019650 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:27:48.062557 kubelet[2843]: E0424 00:27:48.058929 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:27:50.973558 kubelet[2843]: I0424 00:27:50.956972 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s8jk7" podStartSLOduration=5.956957952 podStartE2EDuration="5.956957952s" podCreationTimestamp="2026-04-24 00:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:27:47.073962841 +0000 UTC m=+6.915897878" watchObservedRunningTime="2026-04-24 00:27:50.956957952 +0000 UTC m=+10.798892991"
Apr 24 00:27:52.365409 kubelet[2843]: E0424 00:27:52.362520 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:04.802423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009644003.mount: Deactivated successfully.
Apr 24 00:28:17.501670 containerd[1622]: time="2026-04-24T00:28:17.499781652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:28:17.505646 containerd[1622]: time="2026-04-24T00:28:17.504591484Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 24 00:28:17.511527 containerd[1622]: time="2026-04-24T00:28:17.510656774Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:28:17.513532 containerd[1622]: time="2026-04-24T00:28:17.513049994Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 31.357640249s"
Apr 24 00:28:17.513532 containerd[1622]: time="2026-04-24T00:28:17.513085411Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 24 00:28:17.518453 containerd[1622]: time="2026-04-24T00:28:17.517684781Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 24 00:28:17.904574 containerd[1622]: time="2026-04-24T00:28:17.901605213Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 24 00:28:17.935505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020548088.mount: Deactivated successfully.
Apr 24 00:28:17.945825 containerd[1622]: time="2026-04-24T00:28:17.945115396Z" level=info msg="Container f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:17.970817 containerd[1622]: time="2026-04-24T00:28:17.970496819Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\""
Apr 24 00:28:17.983075 containerd[1622]: time="2026-04-24T00:28:17.982491318Z" level=info msg="StartContainer for \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\""
Apr 24 00:28:17.989133 containerd[1622]: time="2026-04-24T00:28:17.988105626Z" level=info msg="connecting to shim f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9" address="unix:///run/containerd/s/d1c4e33226aca9055ba6f678dbde91dd5125c73e968bb56e84aecfcb7184e12d" protocol=ttrpc version=3
Apr 24 00:28:18.162838 systemd[1]: Started cri-containerd-f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9.scope - libcontainer container f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9.
Apr 24 00:28:18.410538 containerd[1622]: time="2026-04-24T00:28:18.409630258Z" level=info msg="StartContainer for \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\" returns successfully"
Apr 24 00:28:18.482815 systemd[1]: cri-containerd-f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9.scope: Deactivated successfully.
Apr 24 00:28:18.513506 containerd[1622]: time="2026-04-24T00:28:18.512029443Z" level=info msg="received container exit event container_id:\"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\" id:\"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\" pid:3277 exited_at:{seconds:1776990498 nanos:499822869}"
Apr 24 00:28:18.936832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9-rootfs.mount: Deactivated successfully.
Apr 24 00:28:19.372130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593656166.mount: Deactivated successfully.
Apr 24 00:28:19.471758 kubelet[2843]: E0424 00:28:19.471604 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:19.509110 containerd[1622]: time="2026-04-24T00:28:19.508827909Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 24 00:28:19.565022 containerd[1622]: time="2026-04-24T00:28:19.563839717Z" level=info msg="Container 9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:19.636057 containerd[1622]: time="2026-04-24T00:28:19.635137917Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\""
Apr 24 00:28:19.638361 containerd[1622]: time="2026-04-24T00:28:19.637642872Z" level=info msg="StartContainer for \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\""
Apr 24 00:28:19.640425 containerd[1622]: time="2026-04-24T00:28:19.639812882Z" level=info msg="connecting to shim 9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2" address="unix:///run/containerd/s/d1c4e33226aca9055ba6f678dbde91dd5125c73e968bb56e84aecfcb7184e12d" protocol=ttrpc version=3
Apr 24 00:28:19.754499 systemd[1]: Started cri-containerd-9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2.scope - libcontainer container 9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2.
Apr 24 00:28:19.943126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219992351.mount: Deactivated successfully.
Apr 24 00:28:19.972670 containerd[1622]: time="2026-04-24T00:28:19.971474800Z" level=info msg="StartContainer for \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\" returns successfully"
Apr 24 00:28:20.031565 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 00:28:20.033457 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:28:20.038074 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:28:20.046719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:28:20.051059 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 00:28:20.060495 systemd[1]: cri-containerd-9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2.scope: Deactivated successfully.
Apr 24 00:28:20.067669 containerd[1622]: time="2026-04-24T00:28:20.067637569Z" level=info msg="received container exit event container_id:\"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\" id:\"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\" pid:3333 exited_at:{seconds:1776990500 nanos:59849525}"
Apr 24 00:28:20.207850 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:28:20.258067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2-rootfs.mount: Deactivated successfully.
Apr 24 00:28:20.501730 kubelet[2843]: E0424 00:28:20.498742 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:20.578058 containerd[1622]: time="2026-04-24T00:28:20.576096394Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 24 00:28:20.638676 containerd[1622]: time="2026-04-24T00:28:20.636124978Z" level=info msg="Container b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:20.691060 containerd[1622]: time="2026-04-24T00:28:20.690431012Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\""
Apr 24 00:28:20.706692 containerd[1622]: time="2026-04-24T00:28:20.706538167Z" level=info msg="StartContainer for \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\""
Apr 24 00:28:20.712658 containerd[1622]: time="2026-04-24T00:28:20.712572642Z" level=info msg="connecting to shim b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b" address="unix:///run/containerd/s/d1c4e33226aca9055ba6f678dbde91dd5125c73e968bb56e84aecfcb7184e12d" protocol=ttrpc version=3
Apr 24 00:28:20.864061 systemd[1]: Started cri-containerd-b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b.scope - libcontainer container b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b.
Apr 24 00:28:20.940790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067613433.mount: Deactivated successfully.
Apr 24 00:28:21.115129 systemd[1]: cri-containerd-b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b.scope: Deactivated successfully.
Apr 24 00:28:21.118446 containerd[1622]: time="2026-04-24T00:28:21.117771340Z" level=info msg="StartContainer for \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\" returns successfully"
Apr 24 00:28:21.139546 containerd[1622]: time="2026-04-24T00:28:21.138554894Z" level=info msg="received container exit event container_id:\"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\" id:\"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\" pid:3381 exited_at:{seconds:1776990501 nanos:124699638}"
Apr 24 00:28:21.374655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b-rootfs.mount: Deactivated successfully.
Apr 24 00:28:21.531104 kubelet[2843]: E0424 00:28:21.530765 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:21.586863 containerd[1622]: time="2026-04-24T00:28:21.586045430Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 24 00:28:21.637725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911256636.mount: Deactivated successfully.
Apr 24 00:28:21.645578 containerd[1622]: time="2026-04-24T00:28:21.637790934Z" level=info msg="Container c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:21.666605 containerd[1622]: time="2026-04-24T00:28:21.665357359Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\""
Apr 24 00:28:21.669107 containerd[1622]: time="2026-04-24T00:28:21.669081004Z" level=info msg="StartContainer for \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\""
Apr 24 00:28:21.675371 containerd[1622]: time="2026-04-24T00:28:21.675344892Z" level=info msg="connecting to shim c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673" address="unix:///run/containerd/s/d1c4e33226aca9055ba6f678dbde91dd5125c73e968bb56e84aecfcb7184e12d" protocol=ttrpc version=3
Apr 24 00:28:21.792785 systemd[1]: Started cri-containerd-c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673.scope - libcontainer container c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673.
Apr 24 00:28:21.987828 systemd[1]: cri-containerd-c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673.scope: Deactivated successfully.
Apr 24 00:28:22.010081 containerd[1622]: time="2026-04-24T00:28:22.009542727Z" level=info msg="received container exit event container_id:\"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\" id:\"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\" pid:3420 exited_at:{seconds:1776990501 nanos:996651822}"
Apr 24 00:28:22.024599 containerd[1622]: time="2026-04-24T00:28:22.023034809Z" level=info msg="StartContainer for \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\" returns successfully"
Apr 24 00:28:22.197817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673-rootfs.mount: Deactivated successfully.
Apr 24 00:28:22.552681 kubelet[2843]: E0424 00:28:22.551791 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:22.597415 containerd[1622]: time="2026-04-24T00:28:22.596603165Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 24 00:28:22.691561 containerd[1622]: time="2026-04-24T00:28:22.690752768Z" level=info msg="Container 1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:22.740572 containerd[1622]: time="2026-04-24T00:28:22.738669848Z" level=info msg="CreateContainer within sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\""
Apr 24 00:28:22.741575 containerd[1622]: time="2026-04-24T00:28:22.740740086Z" level=info msg="StartContainer for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\""
Apr 24 00:28:22.744386 containerd[1622]: time="2026-04-24T00:28:22.743575587Z" level=info msg="connecting to shim 1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2" address="unix:///run/containerd/s/d1c4e33226aca9055ba6f678dbde91dd5125c73e968bb56e84aecfcb7184e12d" protocol=ttrpc version=3
Apr 24 00:28:22.856780 systemd[1]: Started cri-containerd-1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2.scope - libcontainer container 1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2.
Apr 24 00:28:22.926754 containerd[1622]: time="2026-04-24T00:28:22.925453573Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:28:22.926754 containerd[1622]: time="2026-04-24T00:28:22.926597420Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 24 00:28:22.944427 containerd[1622]: time="2026-04-24T00:28:22.943520675Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:28:22.956121 containerd[1622]: time="2026-04-24T00:28:22.956085250Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.438099406s"
Apr 24 00:28:22.957284 containerd[1622]: time="2026-04-24T00:28:22.956697875Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 24 00:28:22.986616 containerd[1622]: time="2026-04-24T00:28:22.984802742Z" level=info msg="CreateContainer within sandbox \"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 24 00:28:23.044840 containerd[1622]: time="2026-04-24T00:28:23.044026185Z" level=info msg="Container f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:23.074681 containerd[1622]: time="2026-04-24T00:28:23.074618021Z" level=info msg="CreateContainer within sandbox \"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\""
Apr 24 00:28:23.081648 containerd[1622]: time="2026-04-24T00:28:23.081401563Z" level=info msg="StartContainer for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\""
Apr 24 00:28:23.097350 containerd[1622]: time="2026-04-24T00:28:23.093667241Z" level=info msg="connecting to shim f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c" address="unix:///run/containerd/s/c0ef5eae7393ad469704285592f6be5aa398958d946f854188a2fa1eb25e3c3d" protocol=ttrpc version=3
Apr 24 00:28:23.097350 containerd[1622]: time="2026-04-24T00:28:23.094603344Z" level=info msg="StartContainer for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" returns successfully"
Apr 24 00:28:23.181459 systemd[1]: Started cri-containerd-f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c.scope - libcontainer container f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c.
Apr 24 00:28:23.598749 containerd[1622]: time="2026-04-24T00:28:23.598054214Z" level=info msg="StartContainer for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" returns successfully"
Apr 24 00:28:23.763667 kubelet[2843]: I0424 00:28:23.763353 2843 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 24 00:28:23.908725 kubelet[2843]: I0424 00:28:23.907640 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppz2d\" (UniqueName: \"kubernetes.io/projected/4916e010-ad15-43a5-95a6-7298180b96d1-kube-api-access-ppz2d\") pod \"coredns-66bc5c9577-dkhqc\" (UID: \"4916e010-ad15-43a5-95a6-7298180b96d1\") " pod="kube-system/coredns-66bc5c9577-dkhqc"
Apr 24 00:28:23.908725 kubelet[2843]: I0424 00:28:23.907723 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g69d5\" (UniqueName: \"kubernetes.io/projected/54b5777b-c5e7-4374-908f-912192cafb42-kube-api-access-g69d5\") pod \"coredns-66bc5c9577-vvnkw\" (UID: \"54b5777b-c5e7-4374-908f-912192cafb42\") " pod="kube-system/coredns-66bc5c9577-vvnkw"
Apr 24 00:28:23.908725 kubelet[2843]: I0424 00:28:23.907742 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54b5777b-c5e7-4374-908f-912192cafb42-config-volume\") pod \"coredns-66bc5c9577-vvnkw\" (UID: \"54b5777b-c5e7-4374-908f-912192cafb42\") " pod="kube-system/coredns-66bc5c9577-vvnkw"
Apr 24 00:28:23.908725 kubelet[2843]: I0424 00:28:23.907757 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4916e010-ad15-43a5-95a6-7298180b96d1-config-volume\") pod \"coredns-66bc5c9577-dkhqc\" (UID: \"4916e010-ad15-43a5-95a6-7298180b96d1\") " pod="kube-system/coredns-66bc5c9577-dkhqc"
Apr 24 00:28:23.908465 systemd[1]: Created slice kubepods-burstable-pod4916e010_ad15_43a5_95a6_7298180b96d1.slice - libcontainer container kubepods-burstable-pod4916e010_ad15_43a5_95a6_7298180b96d1.slice.
Apr 24 00:28:23.937072 systemd[1]: Created slice kubepods-burstable-pod54b5777b_c5e7_4374_908f_912192cafb42.slice - libcontainer container kubepods-burstable-pod54b5777b_c5e7_4374_908f_912192cafb42.slice.
Apr 24 00:28:24.259427 kubelet[2843]: E0424 00:28:24.256570 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:24.280486 kubelet[2843]: E0424 00:28:24.279103 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:24.282818 containerd[1622]: time="2026-04-24T00:28:24.281862310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dkhqc,Uid:4916e010-ad15-43a5-95a6-7298180b96d1,Namespace:kube-system,Attempt:0,}"
Apr 24 00:28:24.301849 containerd[1622]: time="2026-04-24T00:28:24.301444925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vvnkw,Uid:54b5777b-c5e7-4374-908f-912192cafb42,Namespace:kube-system,Attempt:0,}"
Apr 24 00:28:24.689427 kubelet[2843]: E0424 00:28:24.688578 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:24.711576 kubelet[2843]: E0424 00:28:24.708366 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:25.197736 kubelet[2843]: I0424 00:28:25.197536 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-mld5l" podStartSLOduration=3.914361199 podStartE2EDuration="40.192764669s" podCreationTimestamp="2026-04-24 00:27:45 +0000 UTC" firstStartedPulling="2026-04-24 00:27:46.684382228 +0000 UTC m=+6.526317258" lastFinishedPulling="2026-04-24 00:28:22.962785699 +0000 UTC m=+42.804720728" observedRunningTime="2026-04-24 00:28:24.799849858 +0000 UTC m=+44.641784895" watchObservedRunningTime="2026-04-24 00:28:25.192764669 +0000 UTC m=+45.034699709"
Apr 24 00:28:25.724540 kubelet[2843]: E0424 00:28:25.721790 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:25.728407 kubelet[2843]: E0424 00:28:25.721818 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:30.074064 systemd-networkd[1411]: cilium_host: Link UP
Apr 24 00:28:30.082656 systemd-networkd[1411]: cilium_net: Link UP
Apr 24 00:28:30.083959 systemd-networkd[1411]: cilium_net: Gained carrier
Apr 24 00:28:30.094452 systemd-networkd[1411]: cilium_host: Gained carrier
Apr 24 00:28:30.151670 kubelet[2843]: E0424 00:28:30.141978 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:30.324689 systemd-networkd[1411]: cilium_net: Gained IPv6LL
Apr 24 00:28:30.346076 systemd-networkd[1411]: cilium_host: Gained IPv6LL
Apr 24 00:28:30.860633 systemd-networkd[1411]: cilium_vxlan: Link UP
Apr 24 00:28:30.860641 systemd-networkd[1411]: cilium_vxlan: Gained carrier
Apr 24 00:28:31.830531 kernel: NET: Registered PF_ALG protocol family
Apr 24 00:28:32.647100 systemd-networkd[1411]: cilium_vxlan: Gained IPv6LL
Apr 24 00:28:35.862510 systemd-networkd[1411]: lxc_health: Link UP
Apr 24 00:28:35.891508 systemd-networkd[1411]: lxc_health: Gained carrier
Apr 24 00:28:36.416539 systemd-networkd[1411]: lxccf11bc704381: Link UP
Apr 24 00:28:36.458035 systemd-networkd[1411]: lxc17eeb774e364: Link UP
Apr 24 00:28:36.500575 kernel: eth0: renamed from tmp0bc94
Apr 24 00:28:36.531414 kernel: eth0: renamed from tmp805b1
Apr 24 00:28:36.544580 systemd-networkd[1411]: lxccf11bc704381: Gained carrier
Apr 24 00:28:36.561496 systemd-networkd[1411]: lxc17eeb774e364: Gained carrier
Apr 24 00:28:37.572915 systemd-networkd[1411]: lxccf11bc704381: Gained IPv6LL
Apr 24 00:28:37.585007 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Apr 24 00:28:37.640868 kubelet[2843]: E0424 00:28:37.640513 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:37.720578 kubelet[2843]: I0424 00:28:37.718600 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qgxzr" podStartSLOduration=21.351600941 podStartE2EDuration="52.718584756s" podCreationTimestamp="2026-04-24 00:27:45 +0000 UTC" firstStartedPulling="2026-04-24 00:27:46.149861846 +0000 UTC m=+5.991796875" lastFinishedPulling="2026-04-24 00:28:17.516845662 +0000 UTC m=+37.358780690" observedRunningTime="2026-04-24 00:28:25.204703558 +0000 UTC m=+45.046638626" watchObservedRunningTime="2026-04-24 00:28:37.718584756 +0000 UTC m=+57.560519808"
Apr 24 00:28:37.892976 systemd-networkd[1411]: lxc17eeb774e364: Gained IPv6LL
Apr 24 00:28:37.949093 kubelet[2843]: E0424 00:28:37.948883 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:42.834260 containerd[1622]: time="2026-04-24T00:28:42.833546584Z" level=info msg="connecting to shim 0bc947529027af5d3a3b8d8a7c3d3ef01db7efb87a4c81115871213dcdfbd3c6" address="unix:///run/containerd/s/b4fa6c26dfdbf64f6840827b1b25900887acfec2f2bb09f4917b7be2689c5586" namespace=k8s.io protocol=ttrpc version=3
Apr 24 00:28:42.835809 containerd[1622]: time="2026-04-24T00:28:42.835437271Z" level=info msg="connecting to shim 805b1a66053e62d6917a6c8d01ee77c58be29b3f2b5833a35be222a15d55dbed" address="unix:///run/containerd/s/fef1d2a4106992c84e2d8602bc94ed9a08d0bfb2c5e9d4bbebbd01ddc26c5fea" namespace=k8s.io protocol=ttrpc version=3
Apr 24 00:28:42.975004 systemd[1]: Started cri-containerd-805b1a66053e62d6917a6c8d01ee77c58be29b3f2b5833a35be222a15d55dbed.scope - libcontainer container 805b1a66053e62d6917a6c8d01ee77c58be29b3f2b5833a35be222a15d55dbed.
Apr 24 00:28:43.005693 systemd[1]: Started cri-containerd-0bc947529027af5d3a3b8d8a7c3d3ef01db7efb87a4c81115871213dcdfbd3c6.scope - libcontainer container 0bc947529027af5d3a3b8d8a7c3d3ef01db7efb87a4c81115871213dcdfbd3c6.
Apr 24 00:28:43.029408 systemd-resolved[1492]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 24 00:28:43.067363 systemd-resolved[1492]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 24 00:28:43.113916 containerd[1622]: time="2026-04-24T00:28:43.113024532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dkhqc,Uid:4916e010-ad15-43a5-95a6-7298180b96d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"805b1a66053e62d6917a6c8d01ee77c58be29b3f2b5833a35be222a15d55dbed\""
Apr 24 00:28:43.117380 kubelet[2843]: E0424 00:28:43.116565 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:43.134056 containerd[1622]: time="2026-04-24T00:28:43.132538651Z" level=info msg="CreateContainer within sandbox \"805b1a66053e62d6917a6c8d01ee77c58be29b3f2b5833a35be222a15d55dbed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 24 00:28:43.186072 containerd[1622]: time="2026-04-24T00:28:43.185875902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vvnkw,Uid:54b5777b-c5e7-4374-908f-912192cafb42,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bc947529027af5d3a3b8d8a7c3d3ef01db7efb87a4c81115871213dcdfbd3c6\""
Apr 24 00:28:43.191459 kubelet[2843]: E0424 00:28:43.190951 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:28:43.202770 containerd[1622]: time="2026-04-24T00:28:43.202038593Z" level=info msg="CreateContainer within sandbox \"0bc947529027af5d3a3b8d8a7c3d3ef01db7efb87a4c81115871213dcdfbd3c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 24 00:28:43.232568 containerd[1622]: time="2026-04-24T00:28:43.232328181Z" level=info msg="Container f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:43.235109 containerd[1622]: time="2026-04-24T00:28:43.235007532Z" level=info msg="Container b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:28:43.256964 containerd[1622]: time="2026-04-24T00:28:43.256821360Z" level=info msg="CreateContainer within sandbox \"0bc947529027af5d3a3b8d8a7c3d3ef01db7efb87a4c81115871213dcdfbd3c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89\""
Apr 24 00:28:43.262994 containerd[1622]: time="2026-04-24T00:28:43.262679416Z" level=info msg="StartContainer for \"f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89\""
Apr 24 00:28:43.273440 containerd[1622]: time="2026-04-24T00:28:43.273081929Z" level=info msg="connecting to shim f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89" address="unix:///run/containerd/s/b4fa6c26dfdbf64f6840827b1b25900887acfec2f2bb09f4917b7be2689c5586" protocol=ttrpc version=3
Apr 24 00:28:43.277371 containerd[1622]: time="2026-04-24T00:28:43.276829316Z" level=info msg="CreateContainer within sandbox \"805b1a66053e62d6917a6c8d01ee77c58be29b3f2b5833a35be222a15d55dbed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975\""
Apr 24 00:28:43.278098 containerd[1622]: time="2026-04-24T00:28:43.278078734Z" level=info msg="StartContainer for \"b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975\""
Apr 24 00:28:43.279103 containerd[1622]: time="2026-04-24T00:28:43.279081099Z" level=info msg="connecting to shim b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975" address="unix:///run/containerd/s/fef1d2a4106992c84e2d8602bc94ed9a08d0bfb2c5e9d4bbebbd01ddc26c5fea" protocol=ttrpc version=3
Apr 24 00:28:43.344431 systemd[1]: Started cri-containerd-f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89.scope - libcontainer container f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89.
Apr 24 00:28:43.362730 systemd[1]: Started cri-containerd-b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975.scope - libcontainer container b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975.
Apr 24 00:28:43.493769 containerd[1622]: time="2026-04-24T00:28:43.493127112Z" level=info msg="StartContainer for \"f465487932120cc48a25e4b542a49931f722832241667710f8e43971f371ff89\" returns successfully" Apr 24 00:28:43.548069 containerd[1622]: time="2026-04-24T00:28:43.547746494Z" level=info msg="StartContainer for \"b11c7a7a601d416743506f79ab62a4abc5b7b15d07316d4ae63880534cbc4975\" returns successfully" Apr 24 00:28:43.984913 kubelet[2843]: E0424 00:28:43.984492 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:44.002466 kubelet[2843]: E0424 00:28:44.002055 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:44.015043 kubelet[2843]: I0424 00:28:44.014676 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dkhqc" podStartSLOduration=59.014657144 podStartE2EDuration="59.014657144s" podCreationTimestamp="2026-04-24 00:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:28:44.014388164 +0000 UTC m=+63.856323233" watchObservedRunningTime="2026-04-24 00:28:44.014657144 +0000 UTC m=+63.856592191" Apr 24 00:28:44.039517 kubelet[2843]: I0424 00:28:44.038908 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vvnkw" podStartSLOduration=59.038888312 podStartE2EDuration="59.038888312s" podCreationTimestamp="2026-04-24 00:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:28:44.038722136 +0000 UTC m=+63.880657173" watchObservedRunningTime="2026-04-24 00:28:44.038888312 +0000 UTC 
m=+63.880823349" Apr 24 00:28:45.009288 kubelet[2843]: E0424 00:28:45.008807 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:46.014415 kubelet[2843]: E0424 00:28:46.013436 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:52.876275 kubelet[2843]: E0424 00:28:52.875712 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:54.003956 kubelet[2843]: E0424 00:28:54.003649 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:54.068468 kubelet[2843]: E0424 00:28:54.067686 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:28:57.879349 kubelet[2843]: E0424 00:28:57.878739 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:29:02.882419 kubelet[2843]: E0424 00:29:02.882049 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:29:03.137560 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:55294.service - OpenSSH per-connection server daemon (10.0.0.1:55294). 
Apr 24 00:29:03.257051 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 55294 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:03.259116 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:03.270778 systemd-logind[1595]: New session 10 of user core. Apr 24 00:29:03.277605 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 00:29:03.464383 sshd[4188]: Connection closed by 10.0.0.1 port 55294 Apr 24 00:29:03.464674 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:03.478723 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:55294.service: Deactivated successfully. Apr 24 00:29:03.493881 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 00:29:03.496880 systemd-logind[1595]: Session 10 logged out. Waiting for processes to exit. Apr 24 00:29:03.508408 systemd-logind[1595]: Removed session 10. Apr 24 00:29:06.877936 kubelet[2843]: E0424 00:29:06.877621 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:29:08.484529 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:55300.service - OpenSSH per-connection server daemon (10.0.0.1:55300). Apr 24 00:29:08.568390 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 55300 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:08.570533 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:08.579652 systemd-logind[1595]: New session 11 of user core. Apr 24 00:29:08.592482 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 24 00:29:08.832524 sshd[4206]: Connection closed by 10.0.0.1 port 55300 Apr 24 00:29:08.832821 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:08.837645 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:55300.service: Deactivated successfully. Apr 24 00:29:08.840550 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 00:29:08.842024 systemd-logind[1595]: Session 11 logged out. Waiting for processes to exit. Apr 24 00:29:08.845121 systemd-logind[1595]: Removed session 11. Apr 24 00:29:13.858499 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:40902.service - OpenSSH per-connection server daemon (10.0.0.1:40902). Apr 24 00:29:14.004034 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 40902 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:14.022052 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:14.034718 systemd-logind[1595]: New session 12 of user core. Apr 24 00:29:14.054657 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 00:29:14.319124 sshd[4224]: Connection closed by 10.0.0.1 port 40902 Apr 24 00:29:14.319936 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:14.326686 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:40902.service: Deactivated successfully. Apr 24 00:29:14.329853 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 00:29:14.333460 systemd-logind[1595]: Session 12 logged out. Waiting for processes to exit. Apr 24 00:29:14.335896 systemd-logind[1595]: Removed session 12. Apr 24 00:29:19.340072 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:40904.service - OpenSSH per-connection server daemon (10.0.0.1:40904). 
Apr 24 00:29:19.426879 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 40904 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:19.428626 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:19.442667 systemd-logind[1595]: New session 13 of user core. Apr 24 00:29:19.450630 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 00:29:19.675842 sshd[4243]: Connection closed by 10.0.0.1 port 40904 Apr 24 00:29:19.675987 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:19.700947 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:40904.service: Deactivated successfully. Apr 24 00:29:19.708929 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 00:29:19.718030 systemd-logind[1595]: Session 13 logged out. Waiting for processes to exit. Apr 24 00:29:19.721761 systemd-logind[1595]: Removed session 13. Apr 24 00:29:24.693478 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:56738.service - OpenSSH per-connection server daemon (10.0.0.1:56738). Apr 24 00:29:24.786244 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 56738 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:24.788940 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:24.798464 systemd-logind[1595]: New session 14 of user core. Apr 24 00:29:24.806496 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 00:29:25.069017 sshd[4260]: Connection closed by 10.0.0.1 port 56738 Apr 24 00:29:25.069637 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:25.085747 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:56738.service: Deactivated successfully. Apr 24 00:29:25.094127 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 00:29:25.098302 systemd-logind[1595]: Session 14 logged out. Waiting for processes to exit. 
Apr 24 00:29:25.106278 systemd-logind[1595]: Removed session 14. Apr 24 00:29:30.088790 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:58812.service - OpenSSH per-connection server daemon (10.0.0.1:58812). Apr 24 00:29:30.181638 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 58812 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:30.185044 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:30.194063 systemd-logind[1595]: New session 15 of user core. Apr 24 00:29:30.208059 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 00:29:30.364781 sshd[4277]: Connection closed by 10.0.0.1 port 58812 Apr 24 00:29:30.365012 sshd-session[4274]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:30.370331 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:58812.service: Deactivated successfully. Apr 24 00:29:30.372920 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 00:29:30.374270 systemd-logind[1595]: Session 15 logged out. Waiting for processes to exit. Apr 24 00:29:30.380272 systemd-logind[1595]: Removed session 15. Apr 24 00:29:35.393903 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:58826.service - OpenSSH per-connection server daemon (10.0.0.1:58826). Apr 24 00:29:35.494901 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 58826 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:35.497040 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:35.509910 systemd-logind[1595]: New session 16 of user core. Apr 24 00:29:35.517699 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 24 00:29:35.771117 sshd[4295]: Connection closed by 10.0.0.1 port 58826 Apr 24 00:29:35.771626 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:35.785732 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:58826.service: Deactivated successfully. Apr 24 00:29:35.788794 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 00:29:35.796863 systemd-logind[1595]: Session 16 logged out. Waiting for processes to exit. Apr 24 00:29:35.800256 systemd-logind[1595]: Removed session 16. Apr 24 00:29:40.786458 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:33984.service - OpenSSH per-connection server daemon (10.0.0.1:33984). Apr 24 00:29:40.880673 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 33984 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:40.883702 sshd-session[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:40.895599 systemd-logind[1595]: New session 17 of user core. Apr 24 00:29:40.908616 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 00:29:41.128825 sshd[4314]: Connection closed by 10.0.0.1 port 33984 Apr 24 00:29:41.131046 sshd-session[4311]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:41.142090 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:33984.service: Deactivated successfully. Apr 24 00:29:41.145908 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 00:29:41.149598 systemd-logind[1595]: Session 17 logged out. Waiting for processes to exit. Apr 24 00:29:41.152017 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:33998.service - OpenSSH per-connection server daemon (10.0.0.1:33998). Apr 24 00:29:41.154791 systemd-logind[1595]: Removed session 17. 
Apr 24 00:29:41.260846 sshd[4331]: Accepted publickey for core from 10.0.0.1 port 33998 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:41.263619 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:41.285296 systemd-logind[1595]: New session 18 of user core. Apr 24 00:29:41.301568 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 00:29:41.651296 sshd[4334]: Connection closed by 10.0.0.1 port 33998 Apr 24 00:29:41.658556 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:41.677876 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:33998.service: Deactivated successfully. Apr 24 00:29:41.686657 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 00:29:41.698297 systemd-logind[1595]: Session 18 logged out. Waiting for processes to exit. Apr 24 00:29:41.727817 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:34006.service - OpenSSH per-connection server daemon (10.0.0.1:34006). Apr 24 00:29:41.735028 systemd-logind[1595]: Removed session 18. Apr 24 00:29:41.911851 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 34006 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:41.917824 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:41.935510 systemd-logind[1595]: New session 19 of user core. Apr 24 00:29:41.945508 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 00:29:42.260059 sshd[4349]: Connection closed by 10.0.0.1 port 34006 Apr 24 00:29:42.261790 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:42.278032 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:34006.service: Deactivated successfully. Apr 24 00:29:42.287537 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 00:29:42.292484 systemd-logind[1595]: Session 19 logged out. Waiting for processes to exit. 
Apr 24 00:29:42.296328 systemd-logind[1595]: Removed session 19. Apr 24 00:29:47.280597 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:34018.service - OpenSSH per-connection server daemon (10.0.0.1:34018). Apr 24 00:29:47.422073 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 34018 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:47.425831 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:47.441628 systemd-logind[1595]: New session 20 of user core. Apr 24 00:29:47.464884 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 00:29:47.667932 sshd[4367]: Connection closed by 10.0.0.1 port 34018 Apr 24 00:29:47.668576 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:47.675840 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:34018.service: Deactivated successfully. Apr 24 00:29:47.679013 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 00:29:47.683610 systemd-logind[1595]: Session 20 logged out. Waiting for processes to exit. Apr 24 00:29:47.687547 systemd-logind[1595]: Removed session 20. Apr 24 00:29:51.878811 kubelet[2843]: E0424 00:29:51.878043 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:29:52.708920 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:44912.service - OpenSSH per-connection server daemon (10.0.0.1:44912). Apr 24 00:29:52.800552 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 44912 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:52.804887 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:52.819926 systemd-logind[1595]: New session 21 of user core. Apr 24 00:29:52.825755 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 24 00:29:52.879337 kubelet[2843]: E0424 00:29:52.876340 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:29:53.095087 sshd[4385]: Connection closed by 10.0.0.1 port 44912 Apr 24 00:29:53.096103 sshd-session[4382]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:53.103577 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:44912.service: Deactivated successfully. Apr 24 00:29:53.108521 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 00:29:53.111051 systemd-logind[1595]: Session 21 logged out. Waiting for processes to exit. Apr 24 00:29:53.114595 systemd-logind[1595]: Removed session 21. Apr 24 00:29:55.880728 kubelet[2843]: E0424 00:29:55.880060 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:29:58.113072 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916). Apr 24 00:29:58.258667 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:29:58.261023 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:29:58.274630 systemd-logind[1595]: New session 22 of user core. Apr 24 00:29:58.282632 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 24 00:29:58.559124 sshd[4402]: Connection closed by 10.0.0.1 port 44916 Apr 24 00:29:58.559544 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Apr 24 00:29:58.578778 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:44916.service: Deactivated successfully. Apr 24 00:29:58.583713 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 00:29:58.592562 systemd-logind[1595]: Session 22 logged out. 
Waiting for processes to exit. Apr 24 00:29:58.598819 systemd-logind[1595]: Removed session 22. Apr 24 00:30:01.878617 kubelet[2843]: E0424 00:30:01.877840 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:03.587023 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:58184.service - OpenSSH per-connection server daemon (10.0.0.1:58184). Apr 24 00:30:03.697996 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 58184 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:03.697756 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:03.716554 systemd-logind[1595]: New session 23 of user core. Apr 24 00:30:03.732893 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 24 00:30:04.086037 sshd[4418]: Connection closed by 10.0.0.1 port 58184 Apr 24 00:30:04.087558 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:04.100801 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:58184.service: Deactivated successfully. Apr 24 00:30:04.104946 systemd[1]: session-23.scope: Deactivated successfully. Apr 24 00:30:04.107053 systemd-logind[1595]: Session 23 logged out. Waiting for processes to exit. Apr 24 00:30:04.110635 systemd-logind[1595]: Removed session 23. Apr 24 00:30:09.123650 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198). Apr 24 00:30:09.256120 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:09.258839 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:09.276553 systemd-logind[1595]: New session 24 of user core. 
Apr 24 00:30:09.286958 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 24 00:30:09.636083 sshd[4434]: Connection closed by 10.0.0.1 port 58198 Apr 24 00:30:09.637822 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:09.655884 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:58198.service: Deactivated successfully. Apr 24 00:30:09.659000 systemd[1]: session-24.scope: Deactivated successfully. Apr 24 00:30:09.662759 systemd-logind[1595]: Session 24 logged out. Waiting for processes to exit. Apr 24 00:30:09.666073 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:33716.service - OpenSSH per-connection server daemon (10.0.0.1:33716). Apr 24 00:30:09.671118 systemd-logind[1595]: Removed session 24. Apr 24 00:30:09.759852 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 33716 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:09.762726 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:09.774880 systemd-logind[1595]: New session 25 of user core. Apr 24 00:30:09.779806 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 24 00:30:10.300655 sshd[4450]: Connection closed by 10.0.0.1 port 33716 Apr 24 00:30:10.301681 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:10.312398 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:33716.service: Deactivated successfully. Apr 24 00:30:10.316115 systemd[1]: session-25.scope: Deactivated successfully. Apr 24 00:30:10.320710 systemd-logind[1595]: Session 25 logged out. Waiting for processes to exit. Apr 24 00:30:10.335748 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:33724.service - OpenSSH per-connection server daemon (10.0.0.1:33724). Apr 24 00:30:10.339790 systemd-logind[1595]: Removed session 25. 
Apr 24 00:30:10.432829 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 33724 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:10.434759 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:10.448008 systemd-logind[1595]: New session 26 of user core. Apr 24 00:30:10.463781 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 24 00:30:10.877814 kubelet[2843]: E0424 00:30:10.876919 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:11.934634 sshd[4464]: Connection closed by 10.0.0.1 port 33724 Apr 24 00:30:11.935888 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:11.949655 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:33724.service: Deactivated successfully. Apr 24 00:30:11.963688 systemd[1]: session-26.scope: Deactivated successfully. Apr 24 00:30:11.966578 systemd[1]: session-26.scope: Consumed 1.309s CPU time, 38.4M memory peak. Apr 24 00:30:11.972953 systemd-logind[1595]: Session 26 logged out. Waiting for processes to exit. Apr 24 00:30:11.987559 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736). Apr 24 00:30:11.998933 systemd-logind[1595]: Removed session 26. Apr 24 00:30:12.155727 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:12.158705 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:12.171969 systemd-logind[1595]: New session 27 of user core. Apr 24 00:30:12.177916 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 24 00:30:13.138079 sshd[4485]: Connection closed by 10.0.0.1 port 33736 Apr 24 00:30:13.140115 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:13.200104 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:33736.service: Deactivated successfully. Apr 24 00:30:13.213811 systemd[1]: session-27.scope: Deactivated successfully. Apr 24 00:30:13.220094 systemd-logind[1595]: Session 27 logged out. Waiting for processes to exit. Apr 24 00:30:13.225570 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:33744.service - OpenSSH per-connection server daemon (10.0.0.1:33744). Apr 24 00:30:13.231063 systemd-logind[1595]: Removed session 27. Apr 24 00:30:13.380121 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 33744 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:13.384035 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:13.410048 systemd-logind[1595]: New session 28 of user core. Apr 24 00:30:13.427748 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 24 00:30:13.665708 sshd[4501]: Connection closed by 10.0.0.1 port 33744 Apr 24 00:30:13.665983 sshd-session[4498]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:13.701910 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:33744.service: Deactivated successfully. Apr 24 00:30:13.722059 systemd[1]: session-28.scope: Deactivated successfully. Apr 24 00:30:13.749629 systemd-logind[1595]: Session 28 logged out. Waiting for processes to exit. Apr 24 00:30:13.769625 systemd-logind[1595]: Removed session 28. 
Apr 24 00:30:14.877388 kubelet[2843]: E0424 00:30:14.876789 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:17.882945 kubelet[2843]: E0424 00:30:17.881999 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:18.696942 systemd[1]: Started sshd@28-10.0.0.89:22-10.0.0.1:33760.service - OpenSSH per-connection server daemon (10.0.0.1:33760). Apr 24 00:30:18.833821 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 33760 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:18.837788 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:18.854069 systemd-logind[1595]: New session 29 of user core. Apr 24 00:30:18.862557 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 24 00:30:18.875845 kubelet[2843]: E0424 00:30:18.875825 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:19.129137 sshd[4520]: Connection closed by 10.0.0.1 port 33760 Apr 24 00:30:19.130055 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:19.140024 systemd[1]: sshd@28-10.0.0.89:22-10.0.0.1:33760.service: Deactivated successfully. Apr 24 00:30:19.143643 systemd[1]: session-29.scope: Deactivated successfully. Apr 24 00:30:19.146052 systemd-logind[1595]: Session 29 logged out. Waiting for processes to exit. Apr 24 00:30:19.151713 systemd-logind[1595]: Removed session 29. Apr 24 00:30:24.161408 systemd[1]: Started sshd@29-10.0.0.89:22-10.0.0.1:45480.service - OpenSSH per-connection server daemon (10.0.0.1:45480). 
Apr 24 00:30:24.303069 sshd[4534]: Accepted publickey for core from 10.0.0.1 port 45480 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:24.305660 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:24.319675 systemd-logind[1595]: New session 30 of user core. Apr 24 00:30:24.333747 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 24 00:30:24.655643 sshd[4537]: Connection closed by 10.0.0.1 port 45480 Apr 24 00:30:24.656664 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:24.664451 systemd[1]: sshd@29-10.0.0.89:22-10.0.0.1:45480.service: Deactivated successfully. Apr 24 00:30:24.667929 systemd[1]: session-30.scope: Deactivated successfully. Apr 24 00:30:24.671020 systemd-logind[1595]: Session 30 logged out. Waiting for processes to exit. Apr 24 00:30:24.675660 systemd-logind[1595]: Removed session 30. Apr 24 00:30:29.683433 systemd[1]: Started sshd@30-10.0.0.89:22-10.0.0.1:51132.service - OpenSSH per-connection server daemon (10.0.0.1:51132). Apr 24 00:30:29.979876 sshd[4550]: Accepted publickey for core from 10.0.0.1 port 51132 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:29.994777 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:30.017583 systemd-logind[1595]: New session 31 of user core. Apr 24 00:30:30.025070 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 24 00:30:30.415433 sshd[4553]: Connection closed by 10.0.0.1 port 51132 Apr 24 00:30:30.416798 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:30.434853 systemd[1]: sshd@30-10.0.0.89:22-10.0.0.1:51132.service: Deactivated successfully. Apr 24 00:30:30.450914 systemd[1]: session-31.scope: Deactivated successfully. Apr 24 00:30:30.455076 systemd-logind[1595]: Session 31 logged out. Waiting for processes to exit. 
Apr 24 00:30:30.470452 systemd-logind[1595]: Removed session 31. Apr 24 00:30:35.438798 systemd[1]: Started sshd@31-10.0.0.89:22-10.0.0.1:51136.service - OpenSSH per-connection server daemon (10.0.0.1:51136). Apr 24 00:30:35.558961 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 51136 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:35.562123 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:35.575813 systemd-logind[1595]: New session 32 of user core. Apr 24 00:30:35.584609 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 24 00:30:35.857352 sshd[4573]: Connection closed by 10.0.0.1 port 51136 Apr 24 00:30:35.857101 sshd-session[4570]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:35.866645 systemd[1]: sshd@31-10.0.0.89:22-10.0.0.1:51136.service: Deactivated successfully. Apr 24 00:30:35.872023 systemd[1]: session-32.scope: Deactivated successfully. Apr 24 00:30:35.876715 systemd-logind[1595]: Session 32 logged out. Waiting for processes to exit. Apr 24 00:30:35.889086 systemd-logind[1595]: Removed session 32. Apr 24 00:30:40.893296 systemd[1]: Started sshd@32-10.0.0.89:22-10.0.0.1:47238.service - OpenSSH per-connection server daemon (10.0.0.1:47238). Apr 24 00:30:40.966410 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 47238 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:40.967731 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:40.974340 systemd-logind[1595]: New session 33 of user core. Apr 24 00:30:40.984454 systemd[1]: Started session-33.scope - Session 33 of User core. 
Apr 24 00:30:41.083684 sshd[4592]: Connection closed by 10.0.0.1 port 47238 Apr 24 00:30:41.084375 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:41.088581 systemd[1]: sshd@32-10.0.0.89:22-10.0.0.1:47238.service: Deactivated successfully. Apr 24 00:30:41.090475 systemd[1]: session-33.scope: Deactivated successfully. Apr 24 00:30:41.091505 systemd-logind[1595]: Session 33 logged out. Waiting for processes to exit. Apr 24 00:30:41.093738 systemd-logind[1595]: Removed session 33. Apr 24 00:30:41.513907 update_engine[1598]: I20260424 00:30:41.513612 1598 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 24 00:30:41.513907 update_engine[1598]: I20260424 00:30:41.513854 1598 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 24 00:30:41.515039 update_engine[1598]: I20260424 00:30:41.514678 1598 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 24 00:30:41.515702 update_engine[1598]: I20260424 00:30:41.515642 1598 omaha_request_params.cc:62] Current group set to stable Apr 24 00:30:41.515955 update_engine[1598]: I20260424 00:30:41.515892 1598 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 24 00:30:41.515955 update_engine[1598]: I20260424 00:30:41.515928 1598 update_attempter.cc:643] Scheduling an action processor start. 
Apr 24 00:30:41.515955 update_engine[1598]: I20260424 00:30:41.515944 1598 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 24 00:30:41.516102 update_engine[1598]: I20260424 00:30:41.515969 1598 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 24 00:30:41.516102 update_engine[1598]: I20260424 00:30:41.516072 1598 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 24 00:30:41.516102 update_engine[1598]: I20260424 00:30:41.516078 1598 omaha_request_action.cc:272] Request: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: Apr 24 00:30:41.516102 update_engine[1598]: I20260424 00:30:41.516083 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:30:41.523089 update_engine[1598]: I20260424 00:30:41.522828 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:30:41.524075 update_engine[1598]: I20260424 00:30:41.524017 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 24 00:30:41.524334 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 24 00:30:41.530590 update_engine[1598]: E20260424 00:30:41.530463 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:30:41.530660 update_engine[1598]: I20260424 00:30:41.530619 1598 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 24 00:30:46.110451 systemd[1]: Started sshd@33-10.0.0.89:22-10.0.0.1:47240.service - OpenSSH per-connection server daemon (10.0.0.1:47240). 
Apr 24 00:30:46.175452 sshd[4605]: Accepted publickey for core from 10.0.0.1 port 47240 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:46.177605 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:46.187716 systemd-logind[1595]: New session 34 of user core. Apr 24 00:30:46.194459 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 24 00:30:46.368928 sshd[4608]: Connection closed by 10.0.0.1 port 47240 Apr 24 00:30:46.369421 sshd-session[4605]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:46.385832 systemd[1]: sshd@33-10.0.0.89:22-10.0.0.1:47240.service: Deactivated successfully. Apr 24 00:30:46.388447 systemd[1]: session-34.scope: Deactivated successfully. Apr 24 00:30:46.389839 systemd-logind[1595]: Session 34 logged out. Waiting for processes to exit. Apr 24 00:30:46.392850 systemd[1]: Started sshd@34-10.0.0.89:22-10.0.0.1:47244.service - OpenSSH per-connection server daemon (10.0.0.1:47244). Apr 24 00:30:46.396850 systemd-logind[1595]: Removed session 34. Apr 24 00:30:46.476367 sshd[4621]: Accepted publickey for core from 10.0.0.1 port 47244 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:46.478216 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:46.483644 systemd-logind[1595]: New session 35 of user core. Apr 24 00:30:46.495739 systemd[1]: Started session-35.scope - Session 35 of User core. 
Apr 24 00:30:48.324106 containerd[1622]: time="2026-04-24T00:30:48.323528876Z" level=info msg="StopContainer for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" with timeout 30 (s)" Apr 24 00:30:48.357911 containerd[1622]: time="2026-04-24T00:30:48.357445647Z" level=info msg="Stop container \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" with signal terminated" Apr 24 00:30:48.422331 systemd[1]: cri-containerd-f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c.scope: Deactivated successfully. Apr 24 00:30:48.422739 systemd[1]: cri-containerd-f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c.scope: Consumed 3.226s CPU time, 29.5M memory peak, 4K written to disk. Apr 24 00:30:48.425442 containerd[1622]: time="2026-04-24T00:30:48.424438301Z" level=info msg="received container exit event container_id:\"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" id:\"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" pid:3500 exited_at:{seconds:1776990648 nanos:423897785}" Apr 24 00:30:48.431060 containerd[1622]: time="2026-04-24T00:30:48.430391272Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 00:30:48.450251 containerd[1622]: time="2026-04-24T00:30:48.449801694Z" level=info msg="StopContainer for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" with timeout 2 (s)" Apr 24 00:30:48.452075 containerd[1622]: time="2026-04-24T00:30:48.452007531Z" level=info msg="Stop container \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" with signal terminated" Apr 24 00:30:48.467813 systemd-networkd[1411]: lxc_health: Link DOWN Apr 24 00:30:48.467823 systemd-networkd[1411]: lxc_health: Lost carrier Apr 24 00:30:48.492096 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c-rootfs.mount: Deactivated successfully. Apr 24 00:30:48.495861 systemd[1]: cri-containerd-1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2.scope: Deactivated successfully. Apr 24 00:30:48.496502 systemd[1]: cri-containerd-1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2.scope: Consumed 20.563s CPU time, 123M memory peak, 184K read from disk, 13.3M written to disk. Apr 24 00:30:48.502851 containerd[1622]: time="2026-04-24T00:30:48.499132201Z" level=info msg="received container exit event container_id:\"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" id:\"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" pid:3462 exited_at:{seconds:1776990648 nanos:498281705}" Apr 24 00:30:48.543326 containerd[1622]: time="2026-04-24T00:30:48.543017203Z" level=info msg="StopContainer for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" returns successfully" Apr 24 00:30:48.558423 containerd[1622]: time="2026-04-24T00:30:48.558259957Z" level=info msg="StopPodSandbox for \"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\"" Apr 24 00:30:48.558765 containerd[1622]: time="2026-04-24T00:30:48.558591933Z" level=info msg="Container to stop \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 00:30:48.567786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2-rootfs.mount: Deactivated successfully. Apr 24 00:30:48.584423 systemd[1]: cri-containerd-8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1.scope: Deactivated successfully. 
Apr 24 00:30:48.586671 containerd[1622]: time="2026-04-24T00:30:48.586397175Z" level=info msg="StopContainer for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" returns successfully" Apr 24 00:30:48.588296 containerd[1622]: time="2026-04-24T00:30:48.587999441Z" level=info msg="StopPodSandbox for \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\"" Apr 24 00:30:48.588296 containerd[1622]: time="2026-04-24T00:30:48.588057366Z" level=info msg="Container to stop \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 00:30:48.588296 containerd[1622]: time="2026-04-24T00:30:48.588065889Z" level=info msg="Container to stop \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 00:30:48.588296 containerd[1622]: time="2026-04-24T00:30:48.588072838Z" level=info msg="Container to stop \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 00:30:48.588296 containerd[1622]: time="2026-04-24T00:30:48.588079284Z" level=info msg="Container to stop \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 00:30:48.588296 containerd[1622]: time="2026-04-24T00:30:48.588085395Z" level=info msg="Container to stop \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 24 00:30:48.588702 containerd[1622]: time="2026-04-24T00:30:48.588683221Z" level=info msg="received sandbox exit event container_id:\"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" id:\"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" exit_status:137 exited_at:{seconds:1776990648 
nanos:588352860}" monitor_name=podsandbox Apr 24 00:30:48.607978 systemd[1]: cri-containerd-4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b.scope: Deactivated successfully. Apr 24 00:30:48.612307 containerd[1622]: time="2026-04-24T00:30:48.612086535Z" level=info msg="received sandbox exit event container_id:\"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" id:\"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" exit_status:137 exited_at:{seconds:1776990648 nanos:610606487}" monitor_name=podsandbox Apr 24 00:30:48.664707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1-rootfs.mount: Deactivated successfully. Apr 24 00:30:48.684911 containerd[1622]: time="2026-04-24T00:30:48.683791290Z" level=info msg="shim disconnected" id=8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1 namespace=k8s.io Apr 24 00:30:48.684911 containerd[1622]: time="2026-04-24T00:30:48.683895431Z" level=warning msg="cleaning up after shim disconnected" id=8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1 namespace=k8s.io Apr 24 00:30:48.694728 containerd[1622]: time="2026-04-24T00:30:48.683902970Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 00:30:48.755390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b-rootfs.mount: Deactivated successfully. 
Apr 24 00:30:48.760918 containerd[1622]: time="2026-04-24T00:30:48.760643683Z" level=info msg="shim disconnected" id=4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b namespace=k8s.io Apr 24 00:30:48.760918 containerd[1622]: time="2026-04-24T00:30:48.760673630Z" level=warning msg="cleaning up after shim disconnected" id=4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b namespace=k8s.io Apr 24 00:30:48.760918 containerd[1622]: time="2026-04-24T00:30:48.760679773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 00:30:48.795358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1-shm.mount: Deactivated successfully. Apr 24 00:30:48.799527 containerd[1622]: time="2026-04-24T00:30:48.799405576Z" level=info msg="TearDown network for sandbox \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" successfully" Apr 24 00:30:48.799527 containerd[1622]: time="2026-04-24T00:30:48.799434135Z" level=info msg="StopPodSandbox for \"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" returns successfully" Apr 24 00:30:48.806048 containerd[1622]: time="2026-04-24T00:30:48.805873234Z" level=info msg="TearDown network for sandbox \"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" successfully" Apr 24 00:30:48.806048 containerd[1622]: time="2026-04-24T00:30:48.805921404Z" level=info msg="StopPodSandbox for \"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" returns successfully" Apr 24 00:30:48.817295 containerd[1622]: time="2026-04-24T00:30:48.817262904Z" level=info msg="received sandbox container exit event sandbox_id:\"8556a6725afc1f33a9af6a7cde230efd4463b4d01fd0e853d641ef2e76c3eec1\" exit_status:137 exited_at:{seconds:1776990648 nanos:588352860}" monitor_name=criService Apr 24 00:30:48.817849 containerd[1622]: time="2026-04-24T00:30:48.817452837Z" level=info msg="received sandbox container exit event 
sandbox_id:\"4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b\" exit_status:137 exited_at:{seconds:1776990648 nanos:610606487}" monitor_name=criService Apr 24 00:30:48.962413 kubelet[2843]: I0424 00:30:48.960062 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-hostproc\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.962413 kubelet[2843]: I0424 00:30:48.960626 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/011a458d-8cdd-4c13-9c44-28c738bfa972-cilium-config-path\") pod \"011a458d-8cdd-4c13-9c44-28c738bfa972\" (UID: \"011a458d-8cdd-4c13-9c44-28c738bfa972\") " Apr 24 00:30:48.962413 kubelet[2843]: I0424 00:30:48.960654 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-config-path\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.962413 kubelet[2843]: I0424 00:30:48.960677 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhsfv\" (UniqueName: \"kubernetes.io/projected/011a458d-8cdd-4c13-9c44-28c738bfa972-kube-api-access-jhsfv\") pod \"011a458d-8cdd-4c13-9c44-28c738bfa972\" (UID: \"011a458d-8cdd-4c13-9c44-28c738bfa972\") " Apr 24 00:30:48.962413 kubelet[2843]: I0424 00:30:48.960709 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-cgroup\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.962413 kubelet[2843]: I0424 00:30:48.960723 
2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-run\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964679 kubelet[2843]: I0424 00:30:48.961610 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.964679 kubelet[2843]: I0424 00:30:48.960736 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-etc-cni-netd\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964679 kubelet[2843]: I0424 00:30:48.963325 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/542ac70f-2c8b-455d-82e5-49c0c48732bd-clustermesh-secrets\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964679 kubelet[2843]: I0424 00:30:48.963341 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-net\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964679 kubelet[2843]: I0424 00:30:48.963357 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-hubble-tls\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964679 kubelet[2843]: I0424 00:30:48.963370 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-lib-modules\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964875 kubelet[2843]: I0424 00:30:48.963388 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfjkw\" (UniqueName: \"kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-kube-api-access-cfjkw\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964875 kubelet[2843]: I0424 00:30:48.963400 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-xtables-lock\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964875 kubelet[2843]: I0424 00:30:48.963410 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-kernel\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964875 kubelet[2843]: I0424 00:30:48.963424 2843 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cni-path\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964875 kubelet[2843]: I0424 00:30:48.963435 2843 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-bpf-maps\") pod \"542ac70f-2c8b-455d-82e5-49c0c48732bd\" (UID: \"542ac70f-2c8b-455d-82e5-49c0c48732bd\") " Apr 24 00:30:48.964875 kubelet[2843]: I0424 00:30:48.963465 2843 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:48.965001 kubelet[2843]: I0424 00:30:48.963490 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.965001 kubelet[2843]: I0424 00:30:48.963505 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.965001 kubelet[2843]: I0424 00:30:48.963515 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.965001 kubelet[2843]: I0424 00:30:48.963528 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.966068 kubelet[2843]: I0424 00:30:48.965748 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.966068 kubelet[2843]: I0424 00:30:48.965851 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.966068 kubelet[2843]: I0424 00:30:48.965867 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.967384 kubelet[2843]: I0424 00:30:48.967311 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.967456 kubelet[2843]: I0424 00:30:48.967387 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 24 00:30:48.968407 kubelet[2843]: I0424 00:30:48.968367 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 00:30:48.970416 kubelet[2843]: I0424 00:30:48.970366 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-kube-api-access-cfjkw" (OuterVolumeSpecName: "kube-api-access-cfjkw") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "kube-api-access-cfjkw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:30:48.970711 kubelet[2843]: I0424 00:30:48.970647 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011a458d-8cdd-4c13-9c44-28c738bfa972-kube-api-access-jhsfv" (OuterVolumeSpecName: "kube-api-access-jhsfv") pod "011a458d-8cdd-4c13-9c44-28c738bfa972" (UID: "011a458d-8cdd-4c13-9c44-28c738bfa972"). InnerVolumeSpecName "kube-api-access-jhsfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:30:48.970851 kubelet[2843]: I0424 00:30:48.970784 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/542ac70f-2c8b-455d-82e5-49c0c48732bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 00:30:48.971517 kubelet[2843]: I0424 00:30:48.971450 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/011a458d-8cdd-4c13-9c44-28c738bfa972-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "011a458d-8cdd-4c13-9c44-28c738bfa972" (UID: "011a458d-8cdd-4c13-9c44-28c738bfa972"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 00:30:48.972636 kubelet[2843]: I0424 00:30:48.972536 2843 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "542ac70f-2c8b-455d-82e5-49c0c48732bd" (UID: "542ac70f-2c8b-455d-82e5-49c0c48732bd"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064007 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/011a458d-8cdd-4c13-9c44-28c738bfa972-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064427 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064437 2843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jhsfv\" (UniqueName: \"kubernetes.io/projected/011a458d-8cdd-4c13-9c44-28c738bfa972-kube-api-access-jhsfv\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064453 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064461 2843 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064467 2843 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 kubelet[2843]: I0424 00:30:49.064474 2843 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/542ac70f-2c8b-455d-82e5-49c0c48732bd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.064667 
kubelet[2843]: I0424 00:30:49.064481 2843 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064487 2843 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064494 2843 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064502 2843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cfjkw\" (UniqueName: \"kubernetes.io/projected/542ac70f-2c8b-455d-82e5-49c0c48732bd-kube-api-access-cfjkw\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064509 2843 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064515 2843 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064524 2843 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.068881 kubelet[2843]: I0424 00:30:49.064593 2843 reconciler_common.go:299] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/542ac70f-2c8b-455d-82e5-49c0c48732bd-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 24 00:30:49.265416 kubelet[2843]: I0424 00:30:49.265126 2843 scope.go:117] "RemoveContainer" containerID="f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c" Apr 24 00:30:49.271659 systemd[1]: Removed slice kubepods-besteffort-pod011a458d_8cdd_4c13_9c44_28c738bfa972.slice - libcontainer container kubepods-besteffort-pod011a458d_8cdd_4c13_9c44_28c738bfa972.slice. Apr 24 00:30:49.271810 systemd[1]: kubepods-besteffort-pod011a458d_8cdd_4c13_9c44_28c738bfa972.slice: Consumed 3.310s CPU time, 29.7M memory peak, 4K written to disk. Apr 24 00:30:49.272357 containerd[1622]: time="2026-04-24T00:30:49.272330938Z" level=info msg="RemoveContainer for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\"" Apr 24 00:30:49.281684 containerd[1622]: time="2026-04-24T00:30:49.281594692Z" level=info msg="RemoveContainer for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" returns successfully" Apr 24 00:30:49.282655 systemd[1]: Removed slice kubepods-burstable-pod542ac70f_2c8b_455d_82e5_49c0c48732bd.slice - libcontainer container kubepods-burstable-pod542ac70f_2c8b_455d_82e5_49c0c48732bd.slice. Apr 24 00:30:49.285512 kubelet[2843]: I0424 00:30:49.283005 2843 scope.go:117] "RemoveContainer" containerID="f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c" Apr 24 00:30:49.282769 systemd[1]: kubepods-burstable-pod542ac70f_2c8b_455d_82e5_49c0c48732bd.slice: Consumed 21.051s CPU time, 123.4M memory peak, 505K read from disk, 13.3M written to disk. 
Apr 24 00:30:49.294343 containerd[1622]: time="2026-04-24T00:30:49.283387705Z" level=error msg="ContainerStatus for \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\": not found" Apr 24 00:30:49.295241 kubelet[2843]: E0424 00:30:49.294540 2843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\": not found" containerID="f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c" Apr 24 00:30:49.295241 kubelet[2843]: I0424 00:30:49.294659 2843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c"} err="failed to get container status \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0153583456c2fbfc862ed1cc2e91ed94941ba8a71fda51785cbe6ab35c8528c\": not found" Apr 24 00:30:49.295241 kubelet[2843]: I0424 00:30:49.294689 2843 scope.go:117] "RemoveContainer" containerID="1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2" Apr 24 00:30:49.298253 containerd[1622]: time="2026-04-24T00:30:49.298228975Z" level=info msg="RemoveContainer for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\"" Apr 24 00:30:49.304623 containerd[1622]: time="2026-04-24T00:30:49.304493945Z" level=info msg="RemoveContainer for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" returns successfully" Apr 24 00:30:49.304897 kubelet[2843]: I0424 00:30:49.304728 2843 scope.go:117] "RemoveContainer" containerID="c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673" Apr 24 00:30:49.307064 
containerd[1622]: time="2026-04-24T00:30:49.306944829Z" level=info msg="RemoveContainer for \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\"" Apr 24 00:30:49.312058 containerd[1622]: time="2026-04-24T00:30:49.311885599Z" level=info msg="RemoveContainer for \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\" returns successfully" Apr 24 00:30:49.312354 kubelet[2843]: I0424 00:30:49.312116 2843 scope.go:117] "RemoveContainer" containerID="b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b" Apr 24 00:30:49.315592 containerd[1622]: time="2026-04-24T00:30:49.315534116Z" level=info msg="RemoveContainer for \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\"" Apr 24 00:30:49.334500 containerd[1622]: time="2026-04-24T00:30:49.334127714Z" level=info msg="RemoveContainer for \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\" returns successfully" Apr 24 00:30:49.335409 kubelet[2843]: I0424 00:30:49.335109 2843 scope.go:117] "RemoveContainer" containerID="9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2" Apr 24 00:30:49.337232 containerd[1622]: time="2026-04-24T00:30:49.337209729Z" level=info msg="RemoveContainer for \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\"" Apr 24 00:30:49.342628 containerd[1622]: time="2026-04-24T00:30:49.342060536Z" level=info msg="RemoveContainer for \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\" returns successfully" Apr 24 00:30:49.343805 kubelet[2843]: I0424 00:30:49.343248 2843 scope.go:117] "RemoveContainer" containerID="f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9" Apr 24 00:30:49.346728 containerd[1622]: time="2026-04-24T00:30:49.346658581Z" level=info msg="RemoveContainer for \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\"" Apr 24 00:30:49.351266 containerd[1622]: time="2026-04-24T00:30:49.351087753Z" level=info msg="RemoveContainer for 
\"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\" returns successfully" Apr 24 00:30:49.351405 kubelet[2843]: I0424 00:30:49.351352 2843 scope.go:117] "RemoveContainer" containerID="1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2" Apr 24 00:30:49.351731 containerd[1622]: time="2026-04-24T00:30:49.351653241Z" level=error msg="ContainerStatus for \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\": not found" Apr 24 00:30:49.352224 kubelet[2843]: E0424 00:30:49.351981 2843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\": not found" containerID="1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2" Apr 24 00:30:49.352224 kubelet[2843]: I0424 00:30:49.352104 2843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2"} err="failed to get container status \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1937a37fbe5f1e32623c2cc2ab107f6582d006341cf0460fa76dd360b2189cb2\": not found" Apr 24 00:30:49.352224 kubelet[2843]: I0424 00:30:49.352122 2843 scope.go:117] "RemoveContainer" containerID="c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673" Apr 24 00:30:49.352623 containerd[1622]: time="2026-04-24T00:30:49.352588050Z" level=error msg="ContainerStatus for \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\": not found" Apr 24 00:30:49.352825 kubelet[2843]: E0424 00:30:49.352767 2843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\": not found" containerID="c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673" Apr 24 00:30:49.352860 kubelet[2843]: I0424 00:30:49.352820 2843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673"} err="failed to get container status \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\": rpc error: code = NotFound desc = an error occurred when try to find container \"c85ce64023bd2aa3d29c1dabee33725cf5cd1840c53046d975285477bb119673\": not found" Apr 24 00:30:49.352860 kubelet[2843]: I0424 00:30:49.352842 2843 scope.go:117] "RemoveContainer" containerID="b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b" Apr 24 00:30:49.353104 containerd[1622]: time="2026-04-24T00:30:49.353049237Z" level=error msg="ContainerStatus for \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\": not found" Apr 24 00:30:49.353288 kubelet[2843]: E0424 00:30:49.353239 2843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\": not found" containerID="b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b" Apr 24 00:30:49.353316 kubelet[2843]: I0424 00:30:49.353294 2843 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b"} err="failed to get container status \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1ca37127fa59f999bd20895013271b1ae4b9873dbe1f2d8bce2dfdcf7f21c2b\": not found" Apr 24 00:30:49.353316 kubelet[2843]: I0424 00:30:49.353309 2843 scope.go:117] "RemoveContainer" containerID="9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2" Apr 24 00:30:49.353541 containerd[1622]: time="2026-04-24T00:30:49.353488300Z" level=error msg="ContainerStatus for \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\": not found" Apr 24 00:30:49.353836 kubelet[2843]: E0424 00:30:49.353792 2843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\": not found" containerID="9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2" Apr 24 00:30:49.353867 kubelet[2843]: I0424 00:30:49.353843 2843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2"} err="failed to get container status \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9336f1c6cb7dee6f6602d5f81dfc4e469708665362bfa9d449ebc9054bf5cbb2\": not found" Apr 24 00:30:49.353867 kubelet[2843]: I0424 00:30:49.353855 2843 scope.go:117] "RemoveContainer" containerID="f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9" Apr 24 00:30:49.354224 containerd[1622]: 
time="2026-04-24T00:30:49.354055997Z" level=error msg="ContainerStatus for \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\": not found" Apr 24 00:30:49.354469 kubelet[2843]: E0424 00:30:49.354408 2843 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\": not found" containerID="f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9" Apr 24 00:30:49.354469 kubelet[2843]: I0424 00:30:49.354464 2843 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9"} err="failed to get container status \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5fc516bd807cc4eeb7b1eeaa86dea41cffa874bb1ebad01c29189d91d79f1e9\": not found" Apr 24 00:30:49.496779 systemd[1]: var-lib-kubelet-pods-011a458d\x2d8cdd\x2d4c13\x2d9c44\x2d28c738bfa972-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djhsfv.mount: Deactivated successfully. Apr 24 00:30:49.497046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c57adb0d21fe20ec42d371bbe4d51563bf3c0a7bf5edd288708b05b76adaf5b-shm.mount: Deactivated successfully. Apr 24 00:30:49.497230 systemd[1]: var-lib-kubelet-pods-542ac70f\x2d2c8b\x2d455d\x2d82e5\x2d49c0c48732bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 24 00:30:49.497341 systemd[1]: var-lib-kubelet-pods-542ac70f\x2d2c8b\x2d455d\x2d82e5\x2d49c0c48732bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 24 00:30:49.497383 systemd[1]: var-lib-kubelet-pods-542ac70f\x2d2c8b\x2d455d\x2d82e5\x2d49c0c48732bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcfjkw.mount: Deactivated successfully. Apr 24 00:30:50.240611 sshd[4624]: Connection closed by 10.0.0.1 port 47244 Apr 24 00:30:50.243786 sshd-session[4621]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:50.256420 systemd[1]: sshd@34-10.0.0.89:22-10.0.0.1:47244.service: Deactivated successfully. Apr 24 00:30:50.258446 systemd[1]: session-35.scope: Deactivated successfully. Apr 24 00:30:50.258774 systemd[1]: session-35.scope: Consumed 1.184s CPU time, 27.3M memory peak. Apr 24 00:30:50.259888 systemd-logind[1595]: Session 35 logged out. Waiting for processes to exit. Apr 24 00:30:50.266882 systemd[1]: Started sshd@35-10.0.0.89:22-10.0.0.1:42062.service - OpenSSH per-connection server daemon (10.0.0.1:42062). Apr 24 00:30:50.272760 systemd-logind[1595]: Removed session 35. Apr 24 00:30:50.383055 sshd[4769]: Accepted publickey for core from 10.0.0.1 port 42062 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:50.387998 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:50.394702 systemd-logind[1595]: New session 36 of user core. Apr 24 00:30:50.404447 systemd[1]: Started session-36.scope - Session 36 of User core. 
Apr 24 00:30:50.885220 kubelet[2843]: I0424 00:30:50.885004 2843 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011a458d-8cdd-4c13-9c44-28c738bfa972" path="/var/lib/kubelet/pods/011a458d-8cdd-4c13-9c44-28c738bfa972/volumes" Apr 24 00:30:50.886052 kubelet[2843]: I0424 00:30:50.885869 2843 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="542ac70f-2c8b-455d-82e5-49c0c48732bd" path="/var/lib/kubelet/pods/542ac70f-2c8b-455d-82e5-49c0c48732bd/volumes" Apr 24 00:30:51.070768 kubelet[2843]: E0424 00:30:51.070627 2843 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 24 00:30:51.124855 sshd[4772]: Connection closed by 10.0.0.1 port 42062 Apr 24 00:30:51.129934 sshd-session[4769]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:51.146755 systemd[1]: sshd@35-10.0.0.89:22-10.0.0.1:42062.service: Deactivated successfully. Apr 24 00:30:51.152738 systemd[1]: session-36.scope: Deactivated successfully. Apr 24 00:30:51.173913 systemd-logind[1595]: Session 36 logged out. Waiting for processes to exit. Apr 24 00:30:51.210292 systemd[1]: Started sshd@36-10.0.0.89:22-10.0.0.1:42076.service - OpenSSH per-connection server daemon (10.0.0.1:42076). Apr 24 00:30:51.243026 systemd-logind[1595]: Removed session 36. Apr 24 00:30:51.305718 systemd[1]: Created slice kubepods-burstable-poda2ad9299_c847_48d3_b447_b66a135d0460.slice - libcontainer container kubepods-burstable-poda2ad9299_c847_48d3_b447_b66a135d0460.slice. Apr 24 00:30:51.371045 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 42076 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:51.373054 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:51.381682 systemd-logind[1595]: New session 37 of user core. 
Apr 24 00:30:51.390301 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 24 00:30:51.394627 kubelet[2843]: I0424 00:30:51.394437 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-host-proc-sys-net\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394627 kubelet[2843]: I0424 00:30:51.394625 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-bpf-maps\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394627 kubelet[2843]: I0424 00:30:51.394645 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-cni-path\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394627 kubelet[2843]: I0424 00:30:51.394658 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2ad9299-c847-48d3-b447-b66a135d0460-cilium-config-path\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394627 kubelet[2843]: I0424 00:30:51.394671 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-cilium-cgroup\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394856 kubelet[2843]: I0424 
00:30:51.394688 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2ad9299-c847-48d3-b447-b66a135d0460-hubble-tls\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394856 kubelet[2843]: I0424 00:30:51.394702 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2ad9299-c847-48d3-b447-b66a135d0460-cilium-ipsec-secrets\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394856 kubelet[2843]: I0424 00:30:51.394713 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-host-proc-sys-kernel\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394856 kubelet[2843]: I0424 00:30:51.394724 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tjz\" (UniqueName: \"kubernetes.io/projected/a2ad9299-c847-48d3-b447-b66a135d0460-kube-api-access-q5tjz\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394856 kubelet[2843]: I0424 00:30:51.394737 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-hostproc\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394856 kubelet[2843]: I0424 00:30:51.394760 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-etc-cni-netd\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394973 kubelet[2843]: I0424 00:30:51.394773 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-cilium-run\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394973 kubelet[2843]: I0424 00:30:51.394784 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2ad9299-c847-48d3-b447-b66a135d0460-clustermesh-secrets\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394973 kubelet[2843]: I0424 00:30:51.394796 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-xtables-lock\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.394973 kubelet[2843]: I0424 00:30:51.394807 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2ad9299-c847-48d3-b447-b66a135d0460-lib-modules\") pod \"cilium-8dj96\" (UID: \"a2ad9299-c847-48d3-b447-b66a135d0460\") " pod="kube-system/cilium-8dj96" Apr 24 00:30:51.408085 sshd[4787]: Connection closed by 10.0.0.1 port 42076 Apr 24 00:30:51.409692 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Apr 24 00:30:51.423503 systemd[1]: sshd@36-10.0.0.89:22-10.0.0.1:42076.service: Deactivated successfully. 
Apr 24 00:30:51.426289 systemd[1]: session-37.scope: Deactivated successfully. Apr 24 00:30:51.427815 systemd-logind[1595]: Session 37 logged out. Waiting for processes to exit. Apr 24 00:30:51.431533 systemd[1]: Started sshd@37-10.0.0.89:22-10.0.0.1:42092.service - OpenSSH per-connection server daemon (10.0.0.1:42092). Apr 24 00:30:51.433236 systemd-logind[1595]: Removed session 37. Apr 24 00:30:51.512895 update_engine[1598]: I20260424 00:30:51.512414 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 24 00:30:51.514420 update_engine[1598]: I20260424 00:30:51.513056 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 24 00:30:51.514420 update_engine[1598]: I20260424 00:30:51.514048 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 24 00:30:51.522449 update_engine[1598]: E20260424 00:30:51.522094 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 24 00:30:51.522449 update_engine[1598]: I20260424 00:30:51.522329 1598 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 24 00:30:51.546708 sshd[4794]: Accepted publickey for core from 10.0.0.1 port 42092 ssh2: RSA SHA256:O5eEjr93EU6o+yIitYA6KggdYqbq1kMU8aUvK/sf8Ls Apr 24 00:30:51.551371 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:30:51.566651 systemd-logind[1595]: New session 38 of user core. Apr 24 00:30:51.578454 systemd[1]: Started session-38.scope - Session 38 of User core. 
Apr 24 00:30:51.634608 kubelet[2843]: E0424 00:30:51.632981 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:51.636781 containerd[1622]: time="2026-04-24T00:30:51.636410214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dj96,Uid:a2ad9299-c847-48d3-b447-b66a135d0460,Namespace:kube-system,Attempt:0,}" Apr 24 00:30:51.698803 containerd[1622]: time="2026-04-24T00:30:51.695260206Z" level=info msg="connecting to shim b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384" address="unix:///run/containerd/s/fc1bec9720d3cb332c34ccc517abcc0b3bc5fb146ec05ef3b123066261b1cf6d" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:30:51.742542 systemd[1]: Started cri-containerd-b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384.scope - libcontainer container b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384. 
Apr 24 00:30:51.801979 containerd[1622]: time="2026-04-24T00:30:51.801653984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dj96,Uid:a2ad9299-c847-48d3-b447-b66a135d0460,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\"" Apr 24 00:30:51.803894 kubelet[2843]: E0424 00:30:51.803412 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:51.816350 containerd[1622]: time="2026-04-24T00:30:51.815423601Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 24 00:30:51.840451 containerd[1622]: time="2026-04-24T00:30:51.840074286Z" level=info msg="Container e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:30:51.875465 containerd[1622]: time="2026-04-24T00:30:51.875288987Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d\"" Apr 24 00:30:51.876534 containerd[1622]: time="2026-04-24T00:30:51.876433005Z" level=info msg="StartContainer for \"e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d\"" Apr 24 00:30:51.878859 containerd[1622]: time="2026-04-24T00:30:51.878652784Z" level=info msg="connecting to shim e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d" address="unix:///run/containerd/s/fc1bec9720d3cb332c34ccc517abcc0b3bc5fb146ec05ef3b123066261b1cf6d" protocol=ttrpc version=3 Apr 24 00:30:51.935809 systemd[1]: Started cri-containerd-e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d.scope - libcontainer 
container e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d. Apr 24 00:30:51.992311 containerd[1622]: time="2026-04-24T00:30:51.991752297Z" level=info msg="StartContainer for \"e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d\" returns successfully" Apr 24 00:30:52.054328 systemd[1]: cri-containerd-e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d.scope: Deactivated successfully. Apr 24 00:30:52.056692 containerd[1622]: time="2026-04-24T00:30:52.056495060Z" level=info msg="received container exit event container_id:\"e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d\" id:\"e91e42fa33f61536dd94ea17c72581feef853a3c99c11cba3a0e63653a5d3c3d\" pid:4868 exited_at:{seconds:1776990652 nanos:55775913}" Apr 24 00:30:52.336100 kubelet[2843]: E0424 00:30:52.335982 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:52.345978 containerd[1622]: time="2026-04-24T00:30:52.345860611Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 24 00:30:52.367485 containerd[1622]: time="2026-04-24T00:30:52.367304124Z" level=info msg="Container 39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:30:52.376716 containerd[1622]: time="2026-04-24T00:30:52.376642651Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e\"" Apr 24 00:30:52.378207 containerd[1622]: time="2026-04-24T00:30:52.377985960Z" level=info msg="StartContainer for 
\"39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e\"" Apr 24 00:30:52.379527 containerd[1622]: time="2026-04-24T00:30:52.378966562Z" level=info msg="connecting to shim 39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e" address="unix:///run/containerd/s/fc1bec9720d3cb332c34ccc517abcc0b3bc5fb146ec05ef3b123066261b1cf6d" protocol=ttrpc version=3 Apr 24 00:30:52.407461 systemd[1]: Started cri-containerd-39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e.scope - libcontainer container 39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e. Apr 24 00:30:52.464467 containerd[1622]: time="2026-04-24T00:30:52.464210724Z" level=info msg="StartContainer for \"39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e\" returns successfully" Apr 24 00:30:52.478342 systemd[1]: cri-containerd-39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e.scope: Deactivated successfully. Apr 24 00:30:52.479841 containerd[1622]: time="2026-04-24T00:30:52.479807039Z" level=info msg="received container exit event container_id:\"39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e\" id:\"39056df499dd80465e50f7b66755113c23afd2946c39d30c51dbf6907910328e\" pid:4913 exited_at:{seconds:1776990652 nanos:478411050}" Apr 24 00:30:53.348132 kubelet[2843]: E0424 00:30:53.347921 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:53.363799 containerd[1622]: time="2026-04-24T00:30:53.362020266Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 24 00:30:53.394313 containerd[1622]: time="2026-04-24T00:30:53.393842064Z" level=info msg="Container f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1: CDI devices from CRI Config.CDIDevices: []" Apr 
24 00:30:53.395091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2822721015.mount: Deactivated successfully. Apr 24 00:30:53.412928 containerd[1622]: time="2026-04-24T00:30:53.412737903Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1\"" Apr 24 00:30:53.422750 containerd[1622]: time="2026-04-24T00:30:53.422478946Z" level=info msg="StartContainer for \"f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1\"" Apr 24 00:30:53.424990 containerd[1622]: time="2026-04-24T00:30:53.424963115Z" level=info msg="connecting to shim f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1" address="unix:///run/containerd/s/fc1bec9720d3cb332c34ccc517abcc0b3bc5fb146ec05ef3b123066261b1cf6d" protocol=ttrpc version=3 Apr 24 00:30:53.463499 systemd[1]: Started cri-containerd-f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1.scope - libcontainer container f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1. Apr 24 00:30:53.567333 containerd[1622]: time="2026-04-24T00:30:53.567110271Z" level=info msg="StartContainer for \"f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1\" returns successfully" Apr 24 00:30:53.569515 systemd[1]: cri-containerd-f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1.scope: Deactivated successfully. 
Apr 24 00:30:53.578448 containerd[1622]: time="2026-04-24T00:30:53.577886438Z" level=info msg="received container exit event container_id:\"f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1\" id:\"f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1\" pid:4957 exited_at:{seconds:1776990653 nanos:571674367}" Apr 24 00:30:53.620792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f54dd6d228892ab03de6361d050af8ef98eed1b383afc145c8ca3de8cb15c1b1-rootfs.mount: Deactivated successfully. Apr 24 00:30:54.359242 kubelet[2843]: E0424 00:30:54.358929 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 00:30:54.367553 containerd[1622]: time="2026-04-24T00:30:54.367490066Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 24 00:30:54.511567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867202817.mount: Deactivated successfully. 
Apr 24 00:30:54.516987 containerd[1622]: time="2026-04-24T00:30:54.515134438Z" level=info msg="Container 89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:30:54.541690 containerd[1622]: time="2026-04-24T00:30:54.541450014Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5\""
Apr 24 00:30:54.545658 containerd[1622]: time="2026-04-24T00:30:54.544740145Z" level=info msg="StartContainer for \"89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5\""
Apr 24 00:30:54.551980 containerd[1622]: time="2026-04-24T00:30:54.551788209Z" level=info msg="connecting to shim 89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5" address="unix:///run/containerd/s/fc1bec9720d3cb332c34ccc517abcc0b3bc5fb146ec05ef3b123066261b1cf6d" protocol=ttrpc version=3
Apr 24 00:30:54.606493 systemd[1]: Started cri-containerd-89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5.scope - libcontainer container 89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5.
Apr 24 00:30:54.732406 containerd[1622]: time="2026-04-24T00:30:54.726004223Z" level=info msg="StartContainer for \"89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5\" returns successfully"
Apr 24 00:30:54.745046 systemd[1]: cri-containerd-89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5.scope: Deactivated successfully.
Apr 24 00:30:54.748911 containerd[1622]: time="2026-04-24T00:30:54.748652926Z" level=info msg="received container exit event container_id:\"89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5\" id:\"89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5\" pid:4997 exited_at:{seconds:1776990654 nanos:747446479}"
Apr 24 00:30:54.833559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89ce757963f5e34a44b687207e8f089452d8f0b874daba57424a73b2f3809ab5-rootfs.mount: Deactivated successfully.
Apr 24 00:30:55.370353 kubelet[2843]: E0424 00:30:55.370121 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:30:55.393044 containerd[1622]: time="2026-04-24T00:30:55.392820951Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 24 00:30:55.420975 containerd[1622]: time="2026-04-24T00:30:55.419860077Z" level=info msg="Container c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:30:55.436813 containerd[1622]: time="2026-04-24T00:30:55.436637987Z" level=info msg="CreateContainer within sandbox \"b7b1d362d7cac307000c884ef4c3b8b679386a3af0496bd815c4ef4b474fc384\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18\""
Apr 24 00:30:55.438564 containerd[1622]: time="2026-04-24T00:30:55.438300361Z" level=info msg="StartContainer for \"c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18\""
Apr 24 00:30:55.439997 containerd[1622]: time="2026-04-24T00:30:55.439929355Z" level=info msg="connecting to shim c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18" address="unix:///run/containerd/s/fc1bec9720d3cb332c34ccc517abcc0b3bc5fb146ec05ef3b123066261b1cf6d" protocol=ttrpc version=3
Apr 24 00:30:55.470818 systemd[1]: Started cri-containerd-c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18.scope - libcontainer container c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18.
Apr 24 00:30:55.573854 containerd[1622]: time="2026-04-24T00:30:55.572882470Z" level=info msg="StartContainer for \"c410ecb4b3a309a1281eab325e7ea646eeb6e40056e3902e92af2fbf6b9ada18\" returns successfully"
Apr 24 00:30:56.267515 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_256))
Apr 24 00:30:56.390911 kubelet[2843]: E0424 00:30:56.390534 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:30:57.623470 kubelet[2843]: E0424 00:30:57.623013 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:31:01.518292 update_engine[1598]: I20260424 00:31:01.517580 1598 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 24 00:31:01.518292 update_engine[1598]: I20260424 00:31:01.518258 1598 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 24 00:31:01.521276 update_engine[1598]: I20260424 00:31:01.520912 1598 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 24 00:31:01.528778 update_engine[1598]: E20260424 00:31:01.528690 1598 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 24 00:31:01.528960 update_engine[1598]: I20260424 00:31:01.528901 1598 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 24 00:31:01.825993 systemd-networkd[1411]: lxc_health: Link UP
Apr 24 00:31:01.841452 systemd-networkd[1411]: lxc_health: Gained carrier
Apr 24 00:31:02.839849 kubelet[2843]: E0424 00:31:02.838844 2843 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39808->127.0.0.1:34331: write tcp 127.0.0.1:39808->127.0.0.1:34331: write: broken pipe
Apr 24 00:31:03.041084 systemd-networkd[1411]: lxc_health: Gained IPv6LL
Apr 24 00:31:03.632390 kubelet[2843]: E0424 00:31:03.632008 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:31:03.670073 kubelet[2843]: I0424 00:31:03.669927 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8dj96" podStartSLOduration=12.66991451 podStartE2EDuration="12.66991451s" podCreationTimestamp="2026-04-24 00:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:30:56.435070982 +0000 UTC m=+196.277006022" watchObservedRunningTime="2026-04-24 00:31:03.66991451 +0000 UTC m=+203.511849546"
Apr 24 00:31:04.427403 kubelet[2843]: E0424 00:31:04.426987 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:31:05.434770 kubelet[2843]: E0424 00:31:05.434460 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:31:05.877769 kubelet[2843]: E0424 00:31:05.877112 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 00:31:07.723528 sshd[4801]: Connection closed by 10.0.0.1 port 42092
Apr 24 00:31:07.724230 sshd-session[4794]: pam_unix(sshd:session): session closed for user core
Apr 24 00:31:07.728822 systemd[1]: sshd@37-10.0.0.89:22-10.0.0.1:42092.service: Deactivated successfully.
Apr 24 00:31:07.730849 systemd[1]: session-38.scope: Deactivated successfully.
Apr 24 00:31:07.735503 systemd-logind[1595]: Session 38 logged out. Waiting for processes to exit.
Apr 24 00:31:07.738586 systemd-logind[1595]: Removed session 38.
Apr 24 00:31:07.877272 kubelet[2843]: E0424 00:31:07.876755 2843 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"