Apr 28 00:52:00.297000 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 27 22:40:10 -00 2026 Apr 28 00:52:00.297056 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:52:00.297065 kernel: BIOS-provided physical RAM map: Apr 28 00:52:00.297070 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 28 00:52:00.297074 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Apr 28 00:52:00.297079 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Apr 28 00:52:00.297084 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Apr 28 00:52:00.297088 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Apr 28 00:52:00.297093 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Apr 28 00:52:00.297108 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Apr 28 00:52:00.297116 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Apr 28 00:52:00.297120 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Apr 28 00:52:00.297148 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Apr 28 00:52:00.297163 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Apr 28 00:52:00.297188 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Apr 28 00:52:00.297194 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Apr 28 00:52:00.297201 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Apr 
28 00:52:00.297205 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Apr 28 00:52:00.297210 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Apr 28 00:52:00.297243 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 28 00:52:00.297248 kernel: NX (Execute Disable) protection: active Apr 28 00:52:00.297253 kernel: APIC: Static calls initialized Apr 28 00:52:00.297257 kernel: efi: EFI v2.7 by EDK II Apr 28 00:52:00.297262 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Apr 28 00:52:00.297267 kernel: SMBIOS 2.8 present. Apr 28 00:52:00.297272 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Apr 28 00:52:00.297276 kernel: Hypervisor detected: KVM Apr 28 00:52:00.297295 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 28 00:52:00.297300 kernel: kvm-clock: using sched offset of 7690595270 cycles Apr 28 00:52:00.297305 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 28 00:52:00.297310 kernel: tsc: Detected 2793.438 MHz processor Apr 28 00:52:00.297315 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 28 00:52:00.297321 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 28 00:52:00.297325 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x10000000000 Apr 28 00:52:00.297330 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 28 00:52:00.297335 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 28 00:52:00.297342 kernel: Using GB pages for direct mapping Apr 28 00:52:00.297347 kernel: Secure boot disabled Apr 28 00:52:00.297352 kernel: ACPI: Early table checksum verification disabled Apr 28 00:52:00.297357 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Apr 28 00:52:00.297365 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Apr 28 00:52:00.297370 
kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:00.297375 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:00.297382 kernel: ACPI: FACS 0x000000009CBDD000 000040 Apr 28 00:52:00.297396 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:00.297402 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:00.297407 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:00.297412 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 28 00:52:00.297417 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Apr 28 00:52:00.297422 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Apr 28 00:52:00.297429 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Apr 28 00:52:00.297434 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Apr 28 00:52:00.297439 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Apr 28 00:52:00.297444 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Apr 28 00:52:00.297450 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Apr 28 00:52:00.297455 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Apr 28 00:52:00.297460 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Apr 28 00:52:00.297465 kernel: No NUMA configuration found Apr 28 00:52:00.297478 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Apr 28 00:52:00.297485 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Apr 28 00:52:00.297490 kernel: Zone ranges: Apr 28 00:52:00.297495 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 28 00:52:00.297500 kernel: DMA32 [mem 
0x0000000001000000-0x000000009cf3ffff] Apr 28 00:52:00.297505 kernel: Normal empty Apr 28 00:52:00.297510 kernel: Movable zone start for each node Apr 28 00:52:00.297515 kernel: Early memory node ranges Apr 28 00:52:00.297520 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 28 00:52:00.297525 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Apr 28 00:52:00.297530 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Apr 28 00:52:00.297537 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Apr 28 00:52:00.297542 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Apr 28 00:52:00.297547 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Apr 28 00:52:00.297560 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Apr 28 00:52:00.297565 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 00:52:00.297570 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 28 00:52:00.297575 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Apr 28 00:52:00.297580 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 28 00:52:00.297585 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Apr 28 00:52:00.297593 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 28 00:52:00.297598 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Apr 28 00:52:00.297603 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 28 00:52:00.297608 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 28 00:52:00.297613 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 28 00:52:00.297618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 28 00:52:00.297623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 28 00:52:00.297628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 28 00:52:00.297633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 28 
00:52:00.297640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 28 00:52:00.297645 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 28 00:52:00.297650 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 28 00:52:00.297655 kernel: TSC deadline timer available Apr 28 00:52:00.297660 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Apr 28 00:52:00.297665 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 28 00:52:00.297670 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 28 00:52:00.297675 kernel: kvm-guest: setup PV sched yield Apr 28 00:52:00.297680 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 28 00:52:00.297687 kernel: Booting paravirtualized kernel on KVM Apr 28 00:52:00.297694 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 28 00:52:00.297702 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Apr 28 00:52:00.297710 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Apr 28 00:52:00.297719 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Apr 28 00:52:00.297727 kernel: pcpu-alloc: [0] 0 1 2 3 Apr 28 00:52:00.297735 kernel: kvm-guest: PV spinlocks enabled Apr 28 00:52:00.297742 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 28 00:52:00.297751 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec Apr 28 00:52:00.297784 kernel: random: crng init done Apr 28 00:52:00.297790 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 28 00:52:00.297796 kernel: Inode-cache hash table 
entries: 262144 (order: 9, 2097152 bytes, linear) Apr 28 00:52:00.297801 kernel: Fallback order for Node 0: 0 Apr 28 00:52:00.297806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Apr 28 00:52:00.297811 kernel: Policy zone: DMA32 Apr 28 00:52:00.297816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 28 00:52:00.297821 kernel: Memory: 2399656K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 167140K reserved, 0K cma-reserved) Apr 28 00:52:00.297829 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 28 00:52:00.297834 kernel: ftrace: allocating 37996 entries in 149 pages Apr 28 00:52:00.297839 kernel: ftrace: allocated 149 pages with 4 groups Apr 28 00:52:00.297844 kernel: Dynamic Preempt: voluntary Apr 28 00:52:00.297849 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 28 00:52:00.298135 kernel: rcu: RCU event tracing is enabled. Apr 28 00:52:00.298155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 28 00:52:00.298162 kernel: Trampoline variant of Tasks RCU enabled. Apr 28 00:52:00.298178 kernel: Rude variant of Tasks RCU enabled. Apr 28 00:52:00.298184 kernel: Tracing variant of Tasks RCU enabled. Apr 28 00:52:00.298189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 28 00:52:00.298195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 28 00:52:00.298203 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Apr 28 00:52:00.298209 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 28 00:52:00.298238 kernel: Console: colour dummy device 80x25 Apr 28 00:52:00.298244 kernel: printk: console [ttyS0] enabled Apr 28 00:52:00.298260 kernel: ACPI: Core revision 20230628 Apr 28 00:52:00.298269 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 28 00:52:00.298274 kernel: APIC: Switch to symmetric I/O mode setup Apr 28 00:52:00.298280 kernel: x2apic enabled Apr 28 00:52:00.298286 kernel: APIC: Switched APIC routing to: physical x2apic Apr 28 00:52:00.298292 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 28 00:52:00.298308 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 28 00:52:00.298314 kernel: kvm-guest: setup PV IPIs Apr 28 00:52:00.298320 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 28 00:52:00.298325 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 00:52:00.298334 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438) Apr 28 00:52:00.298339 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 28 00:52:00.298345 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 28 00:52:00.298351 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 28 00:52:00.298356 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 28 00:52:00.298362 kernel: Spectre V2 : Mitigation: Retpolines Apr 28 00:52:00.298367 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 28 00:52:00.298373 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 28 00:52:00.298379 kernel: RETBleed: Vulnerable Apr 28 00:52:00.298386 kernel: Speculative Store Bypass: Vulnerable Apr 28 00:52:00.298392 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 28 00:52:00.298398 kernel: GDS: Unknown: Dependent on hypervisor status Apr 28 00:52:00.298413 kernel: active return thunk: its_return_thunk Apr 28 00:52:00.298419 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 28 00:52:00.298425 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 28 00:52:00.298430 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 28 00:52:00.298436 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 28 00:52:00.298441 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 28 00:52:00.298449 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 28 00:52:00.298455 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 28 00:52:00.298461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 28 00:52:00.298466 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Apr 28 00:52:00.298472 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Apr 28 00:52:00.298478 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Apr 28 00:52:00.298483 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Apr 28 00:52:00.298489 kernel: Freeing SMP alternatives memory: 32K Apr 28 00:52:00.298494 kernel: pid_max: default: 32768 minimum: 301 Apr 28 00:52:00.298502 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 28 00:52:00.298507 kernel: landlock: Up and running. Apr 28 00:52:00.298513 kernel: SELinux: Initializing. 
Apr 28 00:52:00.298519 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 00:52:00.298524 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 28 00:52:00.298530 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6) Apr 28 00:52:00.298536 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:52:00.298542 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:52:00.298547 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 28 00:52:00.298555 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only. Apr 28 00:52:00.298560 kernel: signal: max sigframe size: 3632 Apr 28 00:52:00.298566 kernel: rcu: Hierarchical SRCU implementation. Apr 28 00:52:00.298572 kernel: rcu: Max phase no-delay instances is 400. Apr 28 00:52:00.298578 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 28 00:52:00.298583 kernel: smp: Bringing up secondary CPUs ... Apr 28 00:52:00.298589 kernel: smpboot: x86: Booting SMP configuration: Apr 28 00:52:00.298595 kernel: .... 
node #0, CPUs: #1 #2 #3 Apr 28 00:52:00.298600 kernel: smp: Brought up 1 node, 4 CPUs Apr 28 00:52:00.298608 kernel: smpboot: Max logical packages: 1 Apr 28 00:52:00.298613 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS) Apr 28 00:52:00.298619 kernel: devtmpfs: initialized Apr 28 00:52:00.298633 kernel: x86/mm: Memory block size: 128MB Apr 28 00:52:00.298639 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Apr 28 00:52:00.298645 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Apr 28 00:52:00.298651 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Apr 28 00:52:00.298657 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Apr 28 00:52:00.298662 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Apr 28 00:52:00.298670 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 28 00:52:00.298676 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 28 00:52:00.298682 kernel: pinctrl core: initialized pinctrl subsystem Apr 28 00:52:00.298687 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 28 00:52:00.298693 kernel: audit: initializing netlink subsys (disabled) Apr 28 00:52:00.298699 kernel: audit: type=2000 audit(1777337517.411:1): state=initialized audit_enabled=0 res=1 Apr 28 00:52:00.298705 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 28 00:52:00.298710 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 28 00:52:00.298717 kernel: cpuidle: using governor menu Apr 28 00:52:00.298723 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 28 00:52:00.298728 kernel: dca service started, version 1.12.1 Apr 28 00:52:00.298734 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 28 
00:52:00.298740 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 28 00:52:00.298746 kernel: PCI: Using configuration type 1 for base access Apr 28 00:52:00.298751 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 28 00:52:00.298757 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 28 00:52:00.298763 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 28 00:52:00.298770 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 28 00:52:00.298776 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 28 00:52:00.298781 kernel: ACPI: Added _OSI(Module Device) Apr 28 00:52:00.298787 kernel: ACPI: Added _OSI(Processor Device) Apr 28 00:52:00.298792 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 28 00:52:00.298798 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 28 00:52:00.298804 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 28 00:52:00.298809 kernel: ACPI: Interpreter enabled Apr 28 00:52:00.298815 kernel: ACPI: PM: (supports S0 S3 S5) Apr 28 00:52:00.298822 kernel: ACPI: Using IOAPIC for interrupt routing Apr 28 00:52:00.298828 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 28 00:52:00.298834 kernel: PCI: Using E820 reservations for host bridge windows Apr 28 00:52:00.298839 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 28 00:52:00.298845 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 28 00:52:00.300898 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 28 00:52:00.301853 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Apr 28 00:52:00.302643 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Apr 28 00:52:00.302677 kernel: PCI host bridge to bus 0000:00 Apr 28 00:52:00.302797 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Apr 28 00:52:00.302857 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 28 00:52:00.303294 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 28 00:52:00.303375 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Apr 28 00:52:00.303431 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 28 00:52:00.303487 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Apr 28 00:52:00.303548 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 28 00:52:00.303680 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 28 00:52:00.303824 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Apr 28 00:52:00.303917 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Apr 28 00:52:00.303983 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Apr 28 00:52:00.304044 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 28 00:52:00.304109 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Apr 28 00:52:00.307810 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 28 00:52:00.352320 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Apr 28 00:52:00.353140 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Apr 28 00:52:00.353277 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Apr 28 00:52:00.353343 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Apr 28 00:52:00.353438 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Apr 28 00:52:00.353511 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Apr 28 00:52:00.353573 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Apr 28 00:52:00.353635 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Apr 28 00:52:00.354122 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 Apr 28 00:52:00.354204 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Apr 28 00:52:00.354298 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Apr 28 00:52:00.354363 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Apr 28 00:52:00.354430 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Apr 28 00:52:00.354843 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 28 00:52:00.373570 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 28 00:52:00.373769 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 28 00:52:00.373840 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Apr 28 00:52:00.373925 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Apr 28 00:52:00.374021 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 28 00:52:00.374100 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Apr 28 00:52:00.374107 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 28 00:52:00.374114 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 28 00:52:00.374120 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 28 00:52:00.374125 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 28 00:52:00.374131 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 28 00:52:00.374137 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 28 00:52:00.374143 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 28 00:52:00.374162 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 28 00:52:00.374168 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 28 00:52:00.374182 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 28 00:52:00.374187 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 28 00:52:00.374193 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 
Apr 28 00:52:00.374206 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 28 00:52:00.374212 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 28 00:52:00.374303 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 28 00:52:00.374309 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 28 00:52:00.374317 kernel: iommu: Default domain type: Translated Apr 28 00:52:00.374323 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 28 00:52:00.374329 kernel: efivars: Registered efivars operations Apr 28 00:52:00.374335 kernel: PCI: Using ACPI for IRQ routing Apr 28 00:52:00.374340 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 28 00:52:00.374346 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Apr 28 00:52:00.374352 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Apr 28 00:52:00.374358 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Apr 28 00:52:00.374363 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Apr 28 00:52:00.374454 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 28 00:52:00.374518 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 28 00:52:00.374595 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 28 00:52:00.374602 kernel: vgaarb: loaded Apr 28 00:52:00.374608 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 28 00:52:00.374614 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 28 00:52:00.374629 kernel: clocksource: Switched to clocksource kvm-clock Apr 28 00:52:00.374635 kernel: VFS: Disk quotas dquot_6.6.0 Apr 28 00:52:00.374641 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 28 00:52:00.374650 kernel: pnp: PnP ACPI init Apr 28 00:52:00.375051 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 28 00:52:00.375089 kernel: pnp: PnP ACPI: found 6 devices Apr 28 00:52:00.375096 kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 28 00:52:00.375112 kernel: NET: Registered PF_INET protocol family Apr 28 00:52:00.375118 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 28 00:52:00.375124 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 28 00:52:00.375130 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 28 00:52:00.375140 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 28 00:52:00.375146 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 28 00:52:00.375152 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 28 00:52:00.375158 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:52:00.375164 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 28 00:52:00.375169 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 28 00:52:00.375175 kernel: NET: Registered PF_XDP protocol family Apr 28 00:52:00.375287 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Apr 28 00:52:00.375356 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Apr 28 00:52:00.375471 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 28 00:52:00.375530 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 28 00:52:00.375585 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 28 00:52:00.375641 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Apr 28 00:52:00.375697 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 28 00:52:00.375752 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Apr 28 00:52:00.375759 kernel: PCI: CLS 0 bytes, default 64 Apr 28 00:52:00.376194 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 
fixed counters, 10737418240 ms ovfl timer Apr 28 00:52:00.376201 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns Apr 28 00:52:00.376207 kernel: Initialise system trusted keyrings Apr 28 00:52:00.376246 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 28 00:52:00.376252 kernel: Key type asymmetric registered Apr 28 00:52:00.376258 kernel: Asymmetric key parser 'x509' registered Apr 28 00:52:00.376264 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 28 00:52:00.376270 kernel: io scheduler mq-deadline registered Apr 28 00:52:00.376285 kernel: io scheduler kyber registered Apr 28 00:52:00.376303 kernel: io scheduler bfq registered Apr 28 00:52:00.376308 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 28 00:52:00.376315 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 28 00:52:00.376321 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 28 00:52:00.376327 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 28 00:52:00.376333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 28 00:52:00.376339 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 28 00:52:00.376345 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 28 00:52:00.376351 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 28 00:52:00.376358 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 28 00:52:00.377128 kernel: rtc_cmos 00:04: RTC can wake from S4 Apr 28 00:52:00.377197 kernel: rtc_cmos 00:04: registered as rtc0 Apr 28 00:52:00.377310 kernel: rtc_cmos 00:04: setting system clock to 2026-04-28T00:51:59 UTC (1777337519) Apr 28 00:52:00.377319 kernel: hrtimer: interrupt took 17681400 ns Apr 28 00:52:00.377379 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Apr 28 00:52:00.377387 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 28 
00:52:00.377392 kernel: intel_pstate: CPU model not supported Apr 28 00:52:00.377405 kernel: efifb: probing for efifb Apr 28 00:52:00.377411 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Apr 28 00:52:00.377416 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Apr 28 00:52:00.377422 kernel: efifb: scrolling: redraw Apr 28 00:52:00.377428 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Apr 28 00:52:00.377434 kernel: Console: switching to colour frame buffer device 100x37 Apr 28 00:52:00.377453 kernel: fb0: EFI VGA frame buffer device Apr 28 00:52:00.377461 kernel: pstore: Using crash dump compression: deflate Apr 28 00:52:00.377467 kernel: pstore: Registered efi_pstore as persistent store backend Apr 28 00:52:00.377479 kernel: NET: Registered PF_INET6 protocol family Apr 28 00:52:00.377489 kernel: Segment Routing with IPv6 Apr 28 00:52:00.377499 kernel: In-situ OAM (IOAM) with IPv6 Apr 28 00:52:00.377507 kernel: NET: Registered PF_PACKET protocol family Apr 28 00:52:00.377513 kernel: Key type dns_resolver registered Apr 28 00:52:00.377518 kernel: IPI shorthand broadcast: enabled Apr 28 00:52:00.377524 kernel: sched_clock: Marking stable (2559020752, 512702694)->(3355988939, -284265493) Apr 28 00:52:00.377530 kernel: registered taskstats version 1 Apr 28 00:52:00.377536 kernel: Loading compiled-in X.509 certificates Apr 28 00:52:00.377544 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 40b5c5a01382737457e1eae3e889ae587960eb18' Apr 28 00:52:00.379590 kernel: Key type .fscrypt registered Apr 28 00:52:00.379610 kernel: Key type fscrypt-provisioning registered Apr 28 00:52:00.379617 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 28 00:52:00.379623 kernel: ima: Allocated hash algorithm: sha1
Apr 28 00:52:00.379629 kernel: ima: No architecture policies found
Apr 28 00:52:00.379635 kernel: clk: Disabling unused clocks
Apr 28 00:52:00.379641 kernel: Freeing unused kernel image (initmem) memory: 42884K
Apr 28 00:52:00.379647 kernel: Write protecting the kernel read-only data: 36864k
Apr 28 00:52:00.379660 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 28 00:52:00.379666 kernel: Run /init as init process
Apr 28 00:52:00.379672 kernel:   with arguments:
Apr 28 00:52:00.379678 kernel:     /init
Apr 28 00:52:00.379684 kernel:   with environment:
Apr 28 00:52:00.379689 kernel:     HOME=/
Apr 28 00:52:00.379695 kernel:     TERM=linux
Apr 28 00:52:00.379717 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 00:52:00.379728 systemd[1]: Detected virtualization kvm.
Apr 28 00:52:00.379737 systemd[1]: Detected architecture x86-64.
Apr 28 00:52:00.379743 systemd[1]: Running in initrd.
Apr 28 00:52:00.379750 systemd[1]: No hostname configured, using default hostname.
Apr 28 00:52:00.379756 systemd[1]: Hostname set to <localhost>.
Apr 28 00:52:00.379765 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 00:52:00.379771 systemd[1]: Queued start job for default target initrd.target.
Apr 28 00:52:00.379778 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:52:00.379785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:52:00.379792 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 28 00:52:00.379798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:52:00.379805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 28 00:52:00.379811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 28 00:52:00.379821 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 28 00:52:00.379827 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 28 00:52:00.379834 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:52:00.379840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:52:00.379846 systemd[1]: Reached target paths.target - Path Units.
Apr 28 00:52:00.379853 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 00:52:00.379859 systemd[1]: Reached target swap.target - Swaps.
Apr 28 00:52:00.379867 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 00:52:00.379874 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:52:00.379897 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:52:00.379904 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 28 00:52:00.380296 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 28 00:52:00.380384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:52:00.380391 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:52:00.380398 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:52:00.380419 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 00:52:00.380425 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 28 00:52:00.380431 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 00:52:00.380438 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 28 00:52:00.380444 systemd[1]: Starting systemd-fsck-usr.service...
Apr 28 00:52:00.380451 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 00:52:00.380457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 00:52:00.380464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:52:00.380470 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 28 00:52:00.380479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:52:00.380485 systemd[1]: Finished systemd-fsck-usr.service.
Apr 28 00:52:00.380594 systemd-journald[194]: Collecting audit messages is disabled.
Apr 28 00:52:00.380635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 00:52:00.380642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:52:00.380650 systemd-journald[194]: Journal started
Apr 28 00:52:00.380668 systemd-journald[194]: Runtime Journal (/run/log/journal/41cbaabb421340bd9878d123b3ed9f40) is 6.0M, max 48.3M, 42.2M free.
Apr 28 00:52:00.296070 systemd-modules-load[195]: Inserted module 'overlay'
Apr 28 00:52:00.392585 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 00:52:00.408427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 28 00:52:00.412372 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 28 00:52:00.414235 kernel: Bridge firewalling registered
Apr 28 00:52:00.419911 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 00:52:00.421174 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 00:52:00.421816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:52:00.422282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:52:00.426624 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 00:52:00.443615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 00:52:00.448562 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:52:00.454409 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 28 00:52:00.460799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:52:00.465620 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:52:00.468837 dracut-cmdline[227]: dracut-dracut-053
Apr 28 00:52:00.470963 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:52:00.472738 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=dba81bba70fdc18951de51911456386ac86d38187268d44374f74ed6158168ec
Apr 28 00:52:00.488369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 00:52:00.601085 systemd-resolved[256]: Positive Trust Anchors:
Apr 28 00:52:00.601162 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 00:52:00.601205 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 00:52:00.605089 systemd-resolved[256]: Defaulting to hostname 'linux'.
Apr 28 00:52:00.608474 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 00:52:00.623779 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:52:00.682407 kernel: SCSI subsystem initialized
Apr 28 00:52:00.700067 kernel: Loading iSCSI transport class v2.0-870.
Apr 28 00:52:00.715364 kernel: iscsi: registered transport (tcp)
Apr 28 00:52:00.742571 kernel: iscsi: registered transport (qla4xxx)
Apr 28 00:52:00.742916 kernel: QLogic iSCSI HBA Driver
Apr 28 00:52:00.801496 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 28 00:52:00.814438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 28 00:52:00.844208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 28 00:52:00.844428 kernel: device-mapper: uevent: version 1.0.3
Apr 28 00:52:00.844439 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 28 00:52:00.896590 kernel: raid6: avx512x4 gen() 38187 MB/s
Apr 28 00:52:00.913524 kernel: raid6: avx512x2 gen() 41686 MB/s
Apr 28 00:52:00.930510 kernel: raid6: avx512x1 gen() 43851 MB/s
Apr 28 00:52:00.947539 kernel: raid6: avx2x4   gen() 34517 MB/s
Apr 28 00:52:00.965353 kernel: raid6: avx2x2   gen() 35589 MB/s
Apr 28 00:52:00.986684 kernel: raid6: avx2x1   gen() 17081 MB/s
Apr 28 00:52:00.986977 kernel: raid6: using algorithm avx512x1 gen() 43851 MB/s
Apr 28 00:52:01.014866 kernel: raid6: .... xor() 15455 MB/s, rmw enabled
Apr 28 00:52:01.015432 kernel: raid6: using avx512x2 recovery algorithm
Apr 28 00:52:01.053376 kernel: xor: automatically using best checksumming function   avx
Apr 28 00:52:01.305464 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 28 00:52:01.380507 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 00:52:01.394834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:52:01.427242 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 28 00:52:01.436748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:52:01.451927 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 28 00:52:01.468285 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Apr 28 00:52:01.513196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 00:52:01.533348 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 00:52:01.597146 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:52:01.612351 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 28 00:52:01.674419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 28 00:52:01.678398 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 00:52:01.682107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:52:01.685804 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 00:52:01.698370 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 28 00:52:01.704335 kernel: cryptd: max_cpu_qlen set to 1000
Apr 28 00:52:01.704360 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 28 00:52:01.711261 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 28 00:52:01.714444 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 00:52:01.723023 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 28 00:52:01.723064 kernel: GPT:9289727 != 19775487
Apr 28 00:52:01.723087 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 28 00:52:01.723114 kernel: GPT:9289727 != 19775487
Apr 28 00:52:01.723136 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 28 00:52:01.723151 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 00:52:01.728250 kernel: libata version 3.00 loaded.
Apr 28 00:52:01.728847 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 00:52:01.747821 kernel: ahci 0000:00:1f.2: version 3.0
Apr 28 00:52:01.748090 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 28 00:52:01.748102 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 28 00:52:01.748206 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 28 00:52:01.748319 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 28 00:52:01.729017 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:52:01.752624 kernel: scsi host0: ahci
Apr 28 00:52:01.733914 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 00:52:01.735635 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:52:01.763537 kernel: scsi host1: ahci
Apr 28 00:52:01.763864 kernel: scsi host2: ahci
Apr 28 00:52:01.764027 kernel: scsi host3: ahci
Apr 28 00:52:01.764104 kernel: scsi host4: ahci
Apr 28 00:52:01.735802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:52:01.738725 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:52:01.774657 kernel: scsi host5: ahci
Apr 28 00:52:01.774947 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31
Apr 28 00:52:01.774958 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31
Apr 28 00:52:01.774966 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31
Apr 28 00:52:01.756085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:52:01.784348 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31
Apr 28 00:52:01.784375 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31
Apr 28 00:52:01.784383 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31
Apr 28 00:52:01.784390 kernel: BTRFS: device fsid c393bc7b-9362-4bef-afe6-6491ed4d6c93 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463)
Apr 28 00:52:01.784398 kernel: AES CTR mode by8 optimization enabled
Apr 28 00:52:01.786718 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Apr 28 00:52:01.799151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 28 00:52:01.811121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 00:52:01.815690 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 28 00:52:01.821025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 28 00:52:01.821128 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 28 00:52:01.839466 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 28 00:52:01.839568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:52:01.839615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:52:01.844350 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:52:01.848852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:52:01.858653 disk-uuid[557]: Primary Header is updated.
Apr 28 00:52:01.858653 disk-uuid[557]: Secondary Entries is updated.
Apr 28 00:52:01.858653 disk-uuid[557]: Secondary Header is updated.
Apr 28 00:52:01.864270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 00:52:01.864304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 00:52:01.879671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:52:01.891466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 28 00:52:01.924600 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:52:02.096632 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 28 00:52:02.096992 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 28 00:52:02.098638 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 28 00:52:02.099286 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 28 00:52:02.100283 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 28 00:52:02.102424 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 28 00:52:02.102465 kernel: ata3.00: applying bridge limits
Apr 28 00:52:02.105271 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 28 00:52:02.105286 kernel: ata3.00: configured for UDMA/100
Apr 28 00:52:02.111334 kernel: scsi 2:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Apr 28 00:52:02.171668 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 28 00:52:02.172201 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 28 00:52:02.185742 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 28 00:52:02.870281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 28 00:52:02.870386 disk-uuid[558]: The operation has completed successfully.
Apr 28 00:52:02.910182 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 28 00:52:02.910438 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 28 00:52:02.958580 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 28 00:52:02.963861 sh[599]: Success
Apr 28 00:52:02.980241 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 28 00:52:03.026380 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 28 00:52:03.048050 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 28 00:52:03.051830 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 28 00:52:03.094735 kernel: BTRFS info (device dm-0): first mount of filesystem c393bc7b-9362-4bef-afe6-6491ed4d6c93
Apr 28 00:52:03.096083 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 28 00:52:03.096149 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 28 00:52:03.097613 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 28 00:52:03.098604 kernel: BTRFS info (device dm-0): using free space tree
Apr 28 00:52:03.118861 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 28 00:52:03.121499 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 28 00:52:03.168842 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 28 00:52:03.169770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 28 00:52:03.218633 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 00:52:03.218926 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 00:52:03.218965 kernel: BTRFS info (device vda6): using free space tree
Apr 28 00:52:03.226513 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 00:52:03.239322 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 28 00:52:03.242910 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 00:52:03.248920 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 28 00:52:03.255674 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 28 00:52:03.350455 ignition[689]: Ignition 2.19.0
Apr 28 00:52:03.350494 ignition[689]: Stage: fetch-offline
Apr 28 00:52:03.350521 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Apr 28 00:52:03.350529 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:52:03.350609 ignition[689]: parsed url from cmdline: ""
Apr 28 00:52:03.350612 ignition[689]: no config URL provided
Apr 28 00:52:03.350616 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Apr 28 00:52:03.350622 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Apr 28 00:52:03.350661 ignition[689]: op(1): [started]  loading QEMU firmware config module
Apr 28 00:52:03.350666 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 28 00:52:03.359560 ignition[689]: op(1): [finished] loading QEMU firmware config module
Apr 28 00:52:03.399090 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 00:52:03.425318 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 00:52:03.480352 systemd-networkd[787]: lo: Link UP
Apr 28 00:52:03.480367 systemd-networkd[787]: lo: Gained carrier
Apr 28 00:52:03.483270 systemd-networkd[787]: Enumeration completed
Apr 28 00:52:03.484867 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 00:52:03.484870 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 00:52:03.486680 systemd-networkd[787]: eth0: Link UP
Apr 28 00:52:03.486711 systemd-networkd[787]: eth0: Gained carrier
Apr 28 00:52:03.486718 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 00:52:03.487674 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 00:52:03.490691 systemd[1]: Reached target network.target - Network.
Apr 28 00:52:03.519482 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 00:52:03.544349 ignition[689]: parsing config with SHA512: 5e8934137404bcba69d95e8fc1033793c8b0372315710f1127fc135154cde2e3e1dec3240493814af343496a13291efca9a291c2ce257ba52a67854b44f45ae1
Apr 28 00:52:03.555925 unknown[689]: fetched base config from "system"
Apr 28 00:52:03.556159 unknown[689]: fetched user config from "qemu"
Apr 28 00:52:03.556652 ignition[689]: fetch-offline: fetch-offline passed
Apr 28 00:52:03.560642 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 00:52:03.557756 ignition[689]: Ignition finished successfully
Apr 28 00:52:03.564140 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 28 00:52:03.576644 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 28 00:52:03.599053 ignition[791]: Ignition 2.19.0
Apr 28 00:52:03.599123 ignition[791]: Stage: kargs
Apr 28 00:52:03.599306 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 28 00:52:03.599316 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:52:03.599997 ignition[791]: kargs: kargs passed
Apr 28 00:52:03.600035 ignition[791]: Ignition finished successfully
Apr 28 00:52:03.608762 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 28 00:52:03.621449 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 28 00:52:03.659499 ignition[800]: Ignition 2.19.0
Apr 28 00:52:03.659551 ignition[800]: Stage: disks
Apr 28 00:52:03.659745 ignition[800]: no configs at "/usr/lib/ignition/base.d"
Apr 28 00:52:03.663177 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 28 00:52:03.659757 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:52:03.667817 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 28 00:52:03.660594 ignition[800]: disks: disks passed
Apr 28 00:52:03.678729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 28 00:52:03.660637 ignition[800]: Ignition finished successfully
Apr 28 00:52:03.685921 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 00:52:03.690667 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 00:52:03.696024 systemd[1]: Reached target basic.target - Basic System.
Apr 28 00:52:03.730992 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 28 00:52:03.826921 systemd-fsck[811]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 28 00:52:03.838994 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 28 00:52:03.850863 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 28 00:52:04.024336 kernel: EXT4-fs (vda9): mounted filesystem f590d1f8-5181-4682-9e04-fe65400dca5c r/w with ordered data mode. Quota mode: none.
Apr 28 00:52:04.025490 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 28 00:52:04.028034 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 28 00:52:04.038417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 00:52:04.041384 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 28 00:52:04.049180 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (819)
Apr 28 00:52:04.042874 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 28 00:52:04.042993 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 28 00:52:04.062147 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 00:52:04.062168 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 00:52:04.062176 kernel: BTRFS info (device vda6): using free space tree
Apr 28 00:52:04.043013 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 00:52:04.050710 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 28 00:52:04.056562 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 28 00:52:04.069333 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 00:52:04.072055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 00:52:04.113030 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Apr 28 00:52:04.116866 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Apr 28 00:52:04.123670 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Apr 28 00:52:04.129914 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 28 00:52:04.241039 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 28 00:52:04.248549 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 28 00:52:04.251025 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 28 00:52:04.262882 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 28 00:52:04.266493 kernel: BTRFS info (device vda6): last unmount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 00:52:04.300409 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 28 00:52:04.351700 ignition[934]: INFO     : Ignition 2.19.0
Apr 28 00:52:04.351700 ignition[934]: INFO     : Stage: mount
Apr 28 00:52:04.357458 ignition[934]: INFO     : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:52:04.357458 ignition[934]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:52:04.357458 ignition[934]: INFO     : mount: mount passed
Apr 28 00:52:04.357458 ignition[934]: INFO     : Ignition finished successfully
Apr 28 00:52:04.354181 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 28 00:52:04.372796 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 28 00:52:04.410317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 28 00:52:04.426341 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (946)
Apr 28 00:52:04.431153 kernel: BTRFS info (device vda6): first mount of filesystem 00ce5520-a395-45f5-887a-de6bb1d2f08f
Apr 28 00:52:04.431494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 28 00:52:04.431512 kernel: BTRFS info (device vda6): using free space tree
Apr 28 00:52:04.437654 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 28 00:52:04.439140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 28 00:52:04.499278 ignition[963]: INFO     : Ignition 2.19.0
Apr 28 00:52:04.499278 ignition[963]: INFO     : Stage: files
Apr 28 00:52:04.503565 ignition[963]: INFO     : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:52:04.503565 ignition[963]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:52:04.503565 ignition[963]: DEBUG    : files: compiled without relabeling support, skipping
Apr 28 00:52:04.509401 ignition[963]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 28 00:52:04.509401 ignition[963]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 28 00:52:04.509401 ignition[963]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 28 00:52:04.509401 ignition[963]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 28 00:52:04.518839 ignition[963]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 28 00:52:04.518839 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 00:52:04.518839 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 28 00:52:04.509721 unknown[963]: wrote ssh authorized keys file for user: core
Apr 28 00:52:04.553350 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 28 00:52:04.759704 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 28 00:52:04.759704 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 28 00:52:04.759704 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 28 00:52:05.111040 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 28 00:52:05.467964 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 28 00:52:05.684883 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 28 00:52:05.684883 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 28 00:52:05.696193 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 28 00:52:06.000999 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 28 00:52:07.299424 ignition[963]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 28 00:52:07.299424 ignition[963]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(e): [started]  processing unit "coreos-metadata.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(e): [finished] processing unit "coreos-metadata.service"
Apr 28 00:52:07.309695 ignition[963]: INFO     : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Apr 28 00:52:07.363383 ignition[963]: INFO     : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 00:52:07.377834 ignition[963]: INFO     : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 28 00:52:07.383500 ignition[963]: INFO     : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 28 00:52:07.383500 ignition[963]: INFO     : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Apr 28 00:52:07.391032 ignition[963]: INFO     : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 28 00:52:07.391032 ignition[963]: INFO     : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 28 00:52:07.391032 ignition[963]: INFO     : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 28 00:52:07.391032 ignition[963]: INFO     : files: files passed
Apr 28 00:52:07.391032 ignition[963]: INFO     : Ignition finished successfully
Apr 28 00:52:07.405579 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 28 00:52:07.419550 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 28 00:52:07.426522 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 28 00:52:07.431493 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 28 00:52:07.431606 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 28 00:52:07.438569 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 28 00:52:07.445272 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:52:07.445272 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:52:07.450026 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 28 00:52:07.457081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:52:07.459089 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 28 00:52:07.487128 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 28 00:52:07.529726 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 28 00:52:07.529873 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 28 00:52:07.536183 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 28 00:52:07.540451 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 28 00:52:07.544635 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 28 00:52:07.545947 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 28 00:52:07.569970 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 00:52:07.584108 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 28 00:52:07.599026 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:52:07.599392 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:52:07.604053 systemd[1]: Stopped target timers.target - Timer Units.
Apr 28 00:52:07.609636 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 28 00:52:07.609767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 28 00:52:07.619833 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 28 00:52:07.664477 systemd[1]: Stopped target basic.target - Basic System.
Apr 28 00:52:07.668180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 28 00:52:07.668556 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 28 00:52:07.671577 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 28 00:52:07.677757 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 28 00:52:07.681748 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 28 00:52:07.685968 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 28 00:52:07.689878 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 28 00:52:07.693585 systemd[1]: Stopped target swap.target - Swaps.
Apr 28 00:52:07.697062 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 28 00:52:07.697287 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 28 00:52:07.706811 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:52:07.709081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:52:07.713847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 28 00:52:07.716646 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:52:07.720590 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 28 00:52:07.721641 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 28 00:52:07.730967 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 28 00:52:07.731335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 28 00:52:07.733612 systemd[1]: Stopped target paths.target - Path Units.
Apr 28 00:52:07.741360 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 28 00:52:07.743944 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:52:07.748570 systemd[1]: Stopped target slices.target - Slice Units.
Apr 28 00:52:07.753604 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 28 00:52:07.758701 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 28 00:52:07.758827 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 28 00:52:07.762902 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 28 00:52:07.763054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 28 00:52:07.766410 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 28 00:52:07.766563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 28 00:52:07.769905 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 28 00:52:07.770094 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 28 00:52:07.789620 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 28 00:52:07.796288 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 28 00:52:07.800952 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 28 00:52:07.801824 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:52:07.807390 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 28 00:52:07.810740 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 28 00:52:07.819340 ignition[1017]: INFO : Ignition 2.19.0
Apr 28 00:52:07.819340 ignition[1017]: INFO : Stage: umount
Apr 28 00:52:07.832368 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 28 00:52:07.838458 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 28 00:52:07.838458 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 28 00:52:07.838458 ignition[1017]: INFO : umount: umount passed
Apr 28 00:52:07.838458 ignition[1017]: INFO : Ignition finished successfully
Apr 28 00:52:07.832470 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 28 00:52:07.840559 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 28 00:52:07.840693 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 28 00:52:07.846736 systemd[1]: Stopped target network.target - Network.
Apr 28 00:52:07.850312 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 28 00:52:07.850445 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 28 00:52:07.855594 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 28 00:52:07.855694 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 28 00:52:07.860321 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 28 00:52:07.860433 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 28 00:52:07.865245 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 28 00:52:07.865402 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 28 00:52:07.866350 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 28 00:52:07.869602 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 28 00:52:07.871757 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 28 00:52:07.872407 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 28 00:52:07.872509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 28 00:52:07.876042 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 28 00:52:07.876159 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 28 00:52:07.882380 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 28 00:52:07.885454 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 28 00:52:07.885604 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 28 00:52:07.886896 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 28 00:52:07.887045 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 28 00:52:07.899628 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 28 00:52:07.899909 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:52:07.962002 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 28 00:52:07.966642 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 28 00:52:07.966803 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 28 00:52:07.972211 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 28 00:52:07.972330 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:52:07.975539 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 28 00:52:07.975600 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:52:07.976905 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 28 00:52:07.976984 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:52:07.977854 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:52:07.997141 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 28 00:52:07.997407 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:52:08.002996 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 28 00:52:08.003066 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:52:08.003935 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 28 00:52:08.003978 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:52:08.009825 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 28 00:52:08.010001 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 28 00:52:08.015783 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 28 00:52:08.015872 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 28 00:52:08.022323 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 28 00:52:08.022449 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 28 00:52:08.042569 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 28 00:52:08.044567 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 28 00:52:08.044647 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:52:08.049103 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 28 00:52:08.049165 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:52:08.055543 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 28 00:52:08.055640 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:52:08.060276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 28 00:52:08.060378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:52:08.067749 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 28 00:52:08.067879 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 28 00:52:08.071043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 28 00:52:08.071125 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 28 00:52:08.078122 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 28 00:52:08.098297 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 28 00:52:08.109628 systemd[1]: Switching root.
Apr 28 00:52:08.138491 systemd-journald[194]: Journal stopped
Apr 28 00:52:09.285719 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 28 00:52:09.285775 kernel: SELinux: policy capability network_peer_controls=1
Apr 28 00:52:09.285789 kernel: SELinux: policy capability open_perms=1
Apr 28 00:52:09.285797 kernel: SELinux: policy capability extended_socket_class=1
Apr 28 00:52:09.285805 kernel: SELinux: policy capability always_check_network=0
Apr 28 00:52:09.285813 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 28 00:52:09.285821 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 28 00:52:09.285828 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 28 00:52:09.285840 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 28 00:52:09.285848 kernel: audit: type=1403 audit(1777337528.315:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 28 00:52:09.285860 systemd[1]: Successfully loaded SELinux policy in 45.175ms.
Apr 28 00:52:09.285879 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.808ms.
Apr 28 00:52:09.285888 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 28 00:52:09.285897 systemd[1]: Detected virtualization kvm.
Apr 28 00:52:09.285906 systemd[1]: Detected architecture x86-64.
Apr 28 00:52:09.285914 systemd[1]: Detected first boot.
Apr 28 00:52:09.285939 systemd[1]: Initializing machine ID from VM UUID.
Apr 28 00:52:09.285951 zram_generator::config[1065]: No configuration found.
Apr 28 00:52:09.285966 systemd[1]: Populated /etc with preset unit settings.
Apr 28 00:52:09.285974 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 28 00:52:09.285983 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 28 00:52:09.285991 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 28 00:52:09.286000 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 28 00:52:09.286010 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 28 00:52:09.286018 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 28 00:52:09.286028 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 28 00:52:09.286037 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 28 00:52:09.286045 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 28 00:52:09.286054 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 28 00:52:09.286062 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 28 00:52:09.286070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 28 00:52:09.286078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 28 00:52:09.286087 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 28 00:52:09.286095 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 28 00:52:09.286105 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 28 00:52:09.286114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 28 00:52:09.286122 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 28 00:52:09.286130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 28 00:52:09.286139 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 28 00:52:09.286146 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 28 00:52:09.286154 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 28 00:52:09.286164 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 28 00:52:09.286172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 28 00:52:09.286180 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 28 00:52:09.286189 systemd[1]: Reached target slices.target - Slice Units.
Apr 28 00:52:09.286197 systemd[1]: Reached target swap.target - Swaps.
Apr 28 00:52:09.286205 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 28 00:52:09.286234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 28 00:52:09.286244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 28 00:52:09.286252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 28 00:52:09.286260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 28 00:52:09.286270 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 28 00:52:09.286278 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 28 00:52:09.286286 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 28 00:52:09.286295 systemd[1]: Mounting media.mount - External Media Directory...
Apr 28 00:52:09.286303 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:52:09.286311 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 28 00:52:09.286320 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 28 00:52:09.286328 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 28 00:52:09.286338 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 28 00:52:09.286346 systemd[1]: Reached target machines.target - Containers.
Apr 28 00:52:09.286354 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 28 00:52:09.286363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 00:52:09.286371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 28 00:52:09.286380 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 28 00:52:09.286389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 00:52:09.286397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 00:52:09.286405 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 00:52:09.286415 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 28 00:52:09.286423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 00:52:09.286431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 28 00:52:09.286439 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 28 00:52:09.286447 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 28 00:52:09.286455 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 28 00:52:09.286463 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 28 00:52:09.286470 kernel: fuse: init (API version 7.39)
Apr 28 00:52:09.286480 kernel: loop: module loaded
Apr 28 00:52:09.286487 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 28 00:52:09.286496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 28 00:52:09.286504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 28 00:52:09.286512 kernel: ACPI: bus type drm_connector registered
Apr 28 00:52:09.286532 systemd-journald[1149]: Collecting audit messages is disabled.
Apr 28 00:52:09.286551 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 28 00:52:09.286563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 28 00:52:09.286573 systemd-journald[1149]: Journal started
Apr 28 00:52:09.286590 systemd-journald[1149]: Runtime Journal (/run/log/journal/41cbaabb421340bd9878d123b3ed9f40) is 6.0M, max 48.3M, 42.2M free.
Apr 28 00:52:08.901501 systemd[1]: Queued start job for default target multi-user.target.
Apr 28 00:52:08.926985 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 28 00:52:08.927441 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 28 00:52:09.293769 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 28 00:52:09.293864 systemd[1]: Stopped verity-setup.service.
Apr 28 00:52:09.299262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:52:09.302454 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 28 00:52:09.302957 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 28 00:52:09.304537 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 28 00:52:09.306445 systemd[1]: Mounted media.mount - External Media Directory.
Apr 28 00:52:09.312689 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 28 00:52:09.318639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 28 00:52:09.350065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 28 00:52:09.351899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 28 00:52:09.354625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 28 00:52:09.357731 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 28 00:52:09.358637 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 28 00:52:09.360734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 00:52:09.360914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 00:52:09.362818 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 00:52:09.362987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 00:52:09.365398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 00:52:09.365575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 00:52:09.367683 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 28 00:52:09.367823 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 28 00:52:09.371189 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 00:52:09.377936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 00:52:09.383407 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 28 00:52:09.388992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 28 00:52:09.391406 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 28 00:52:09.399490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 28 00:52:09.409690 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 28 00:52:09.430006 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 28 00:52:09.433438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 28 00:52:09.435035 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 28 00:52:09.435071 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 28 00:52:09.437345 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 28 00:52:09.440507 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 28 00:52:09.443556 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 28 00:52:09.445102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 00:52:09.449037 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 28 00:52:09.451442 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 28 00:52:09.453050 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 00:52:09.453834 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 28 00:52:09.456389 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 00:52:09.458487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 28 00:52:09.462452 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 28 00:52:09.465646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 28 00:52:09.470763 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 28 00:52:09.475613 systemd-journald[1149]: Time spent on flushing to /var/log/journal/41cbaabb421340bd9878d123b3ed9f40 is 38.792ms for 1004 entries.
Apr 28 00:52:09.475613 systemd-journald[1149]: System Journal (/var/log/journal/41cbaabb421340bd9878d123b3ed9f40) is 8.0M, max 195.6M, 187.6M free.
Apr 28 00:52:09.532083 systemd-journald[1149]: Received client request to flush runtime journal.
Apr 28 00:52:09.532120 kernel: loop0: detected capacity change from 0 to 140768
Apr 28 00:52:09.532131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 28 00:52:09.475469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 28 00:52:09.479041 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 28 00:52:09.481396 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 28 00:52:09.493105 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 28 00:52:09.495480 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 28 00:52:09.504463 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 28 00:52:09.506458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 28 00:52:09.516143 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 28 00:52:09.524869 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Apr 28 00:52:09.524878 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Apr 28 00:52:09.528514 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 28 00:52:09.544462 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 28 00:52:09.549579 kernel: loop1: detected capacity change from 0 to 217752
Apr 28 00:52:09.546452 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 28 00:52:09.558207 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 28 00:52:09.559050 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 28 00:52:09.574168 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 28 00:52:09.583482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 28 00:52:09.603336 kernel: loop2: detected capacity change from 0 to 142488
Apr 28 00:52:09.606061 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Apr 28 00:52:09.606079 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Apr 28 00:52:09.613259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 28 00:52:09.657248 kernel: loop3: detected capacity change from 0 to 140768
Apr 28 00:52:09.678250 kernel: loop4: detected capacity change from 0 to 217752
Apr 28 00:52:09.689523 kernel: loop5: detected capacity change from 0 to 142488
Apr 28 00:52:09.701870 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 28 00:52:09.702371 (sd-merge)[1207]: Merged extensions into '/usr'.
Apr 28 00:52:09.706966 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 28 00:52:09.706981 systemd[1]: Reloading...
Apr 28 00:52:09.810293 zram_generator::config[1230]: No configuration found.
Apr 28 00:52:09.870824 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 28 00:52:09.918897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:52:09.951296 systemd[1]: Reloading finished in 243 ms.
Apr 28 00:52:10.001637 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 28 00:52:10.003764 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 28 00:52:10.007353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 28 00:52:10.031473 systemd[1]: Starting ensure-sysext.service...
Apr 28 00:52:10.033837 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 28 00:52:10.036697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 28 00:52:10.040117 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)...
Apr 28 00:52:10.040138 systemd[1]: Reloading...
Apr 28 00:52:10.055683 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 28 00:52:10.056025 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 28 00:52:10.056860 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 28 00:52:10.057082 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Apr 28 00:52:10.057160 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Apr 28 00:52:10.058712 systemd-udevd[1274]: Using default interface naming scheme 'v255'.
Apr 28 00:52:10.059332 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 00:52:10.059345 systemd-tmpfiles[1273]: Skipping /boot
Apr 28 00:52:10.066273 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Apr 28 00:52:10.066282 systemd-tmpfiles[1273]: Skipping /boot
Apr 28 00:52:10.093256 zram_generator::config[1299]: No configuration found.
Apr 28 00:52:10.165732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1301)
Apr 28 00:52:10.213344 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 28 00:52:10.213420 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 28 00:52:10.215466 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 28 00:52:10.216646 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 28 00:52:10.216782 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 28 00:52:10.221320 kernel: ACPI: button: Power Button [PWRF]
Apr 28 00:52:10.232803 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 28 00:52:10.245257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 28 00:52:10.259257 kernel: mousedev: PS/2 mouse device common for all mice
Apr 28 00:52:10.297592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 28 00:52:10.299662 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 28 00:52:10.299899 systemd[1]: Reloading finished in 259 ms.
Apr 28 00:52:10.396150 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 28 00:52:10.406738 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 28 00:52:10.450375 systemd[1]: Finished ensure-sysext.service.
Apr 28 00:52:10.464787 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 28 00:52:10.471013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:52:10.481551 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 28 00:52:10.486007 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 28 00:52:10.488601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 28 00:52:10.490842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 28 00:52:10.493435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 28 00:52:10.497556 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 28 00:52:10.503968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 28 00:52:10.507591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 28 00:52:10.509200 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 28 00:52:10.510808 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 00:52:10.512143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 28 00:52:10.516499 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 28 00:52:10.520920 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 28 00:52:10.532601 augenrules[1396]: No rules
Apr 28 00:52:10.534413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 28 00:52:10.539250 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 28 00:52:10.542061 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 28 00:52:10.547908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 28 00:52:10.549604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 28 00:52:10.551190 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 28 00:52:10.553369 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 28 00:52:10.556825 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 28 00:52:10.559752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 28 00:52:10.560027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 28 00:52:10.561949 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 28 00:52:10.562091 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 28 00:52:10.564035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 28 00:52:10.564161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 28 00:52:10.567155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 28 00:52:10.567483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 28 00:52:10.570293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 28 00:52:10.570608 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 28 00:52:10.582068 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 28 00:52:10.590400 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 28 00:52:10.590501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 28 00:52:10.590558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 28 00:52:10.594377 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 28 00:52:10.594536 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 28 00:52:10.597437 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 28 00:52:10.598904 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 28 00:52:10.599407 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 28 00:52:10.604044 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 28 00:52:10.616059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 28 00:52:10.627149 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 28 00:52:10.644072 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 28 00:52:10.699329 systemd-resolved[1395]: Positive Trust Anchors:
Apr 28 00:52:10.699349 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 28 00:52:10.699375 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 28 00:52:10.703704 systemd-resolved[1395]: Defaulting to hostname 'linux'.
Apr 28 00:52:10.706475 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 28 00:52:10.708251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 28 00:52:10.711327 systemd-networkd[1392]: lo: Link UP
Apr 28 00:52:10.711348 systemd-networkd[1392]: lo: Gained carrier
Apr 28 00:52:10.712352 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 28 00:52:10.712761 systemd-networkd[1392]: Enumeration completed
Apr 28 00:52:10.713531 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 00:52:10.713549 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 28 00:52:10.714176 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 28 00:52:10.716296 systemd-networkd[1392]: eth0: Link UP
Apr 28 00:52:10.716314 systemd-networkd[1392]: eth0: Gained carrier
Apr 28 00:52:10.716334 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 28 00:52:10.716642 systemd[1]: Reached target network.target - Network.
Apr 28 00:52:10.718098 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 28 00:52:10.721155 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 28 00:52:10.723504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 28 00:52:10.725478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 28 00:52:10.728706 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 28 00:52:10.728771 systemd[1]: Reached target paths.target - Path Units.
Apr 28 00:52:10.730148 systemd[1]: Reached target time-set.target - System Time Set.
Apr 28 00:52:10.731832 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 28 00:52:10.740434 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 28 00:52:10.742755 systemd[1]: Reached target timers.target - Timer Units.
Apr 28 00:52:10.750400 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 28 00:52:10.754206 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 28 00:52:10.754292 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 28 00:52:10.756564 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
Apr 28 00:52:11.437633 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 28 00:52:11.437640 systemd-resolved[1395]: Clock change detected. Flushing caches.
Apr 28 00:52:11.437671 systemd-timesyncd[1401]: Initial clock synchronization to Tue 2026-04-28 00:52:11.436441 UTC.
Apr 28 00:52:11.446623 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 28 00:52:11.449423 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 28 00:52:11.451783 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 28 00:52:11.454038 systemd[1]: Reached target sockets.target - Socket Units.
Apr 28 00:52:11.455666 systemd[1]: Reached target basic.target - Basic System.
Apr 28 00:52:11.455772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 28 00:52:11.455789 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 28 00:52:11.457316 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 28 00:52:11.461255 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 28 00:52:11.464342 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 28 00:52:11.469105 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 28 00:52:11.471496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 28 00:52:11.473757 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 28 00:52:11.474362 jq[1438]: false
Apr 28 00:52:11.475906 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 28 00:52:11.481376 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 28 00:52:11.487040 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found loop3
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found loop4
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found loop5
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found sr0
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda1
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda2
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda3
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found usr
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda4
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda6
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda7
Apr 28 00:52:11.495100 extend-filesystems[1439]: Found vda9
Apr 28 00:52:11.495100 extend-filesystems[1439]: Checking size of /dev/vda9
Apr 28 00:52:11.494850 dbus-daemon[1437]: [system] SELinux support is enabled
Apr 28 00:52:11.492777 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 28 00:52:11.545680 extend-filesystems[1439]: Resized partition /dev/vda9
Apr 28 00:52:11.502142 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 28 00:52:11.508673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 28 00:52:11.549926 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Apr 28 00:52:11.562619 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 28 00:52:11.515512 systemd[1]: Starting update-engine.service - Update Engine...
Apr 28 00:52:11.526471 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 28 00:52:11.569188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1301)
Apr 28 00:52:11.569267 update_engine[1451]: I20260428 00:52:11.552260 1451 main.cc:92] Flatcar Update Engine starting
Apr 28 00:52:11.569267 update_engine[1451]: I20260428 00:52:11.556697 1451 update_check_scheduler.cc:74] Next update check in 7m6s
Apr 28 00:52:11.534586 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 28 00:52:11.569773 jq[1456]: true
Apr 28 00:52:11.555174 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 28 00:52:11.555399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 28 00:52:11.561132 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 28 00:52:11.561350 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 28 00:52:11.580409 systemd[1]: motdgen.service: Deactivated successfully.
Apr 28 00:52:11.580421 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 28 00:52:11.580453 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 28 00:52:11.580744 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 28 00:52:11.581685 systemd-logind[1445]: New seat seat0.
Apr 28 00:52:11.592314 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 28 00:52:11.596367 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 28 00:52:11.595983 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 28 00:52:11.598675 jq[1464]: true
Apr 28 00:52:11.604573 tar[1462]: linux-amd64/LICENSE
Apr 28 00:52:11.604397 systemd[1]: Started update-engine.service - Update Engine.
Apr 28 00:52:11.611144 tar[1462]: linux-amd64/helm
Apr 28 00:52:11.612597 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 28 00:52:11.612597 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 28 00:52:11.612597 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 28 00:52:11.623264 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Apr 28 00:52:11.617731 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 28 00:52:11.617939 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 28 00:52:11.627861 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 28 00:52:11.630389 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 28 00:52:11.633419 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 28 00:52:11.633553 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 28 00:52:11.643398 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 28 00:52:11.677193 bash[1492]: Updated "/home/core/.ssh/authorized_keys"
Apr 28 00:52:11.678797 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 28 00:52:11.683549 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 28 00:52:11.697351 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 28 00:52:11.720871 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 28 00:52:11.748318 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 28 00:52:11.757398 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 28 00:52:11.766840 systemd[1]: issuegen.service: Deactivated successfully.
Apr 28 00:52:11.767117 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 28 00:52:11.775308 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 28 00:52:11.782658 containerd[1465]: time="2026-04-28T00:52:11.782555809Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 28 00:52:11.786634 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 28 00:52:11.799570 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 28 00:52:11.802975 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 28 00:52:11.805911 systemd[1]: Reached target getty.target - Login Prompts.
Apr 28 00:52:11.818666 containerd[1465]: time="2026-04-28T00:52:11.818427852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822196 containerd[1465]: time="2026-04-28T00:52:11.822025708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822196 containerd[1465]: time="2026-04-28T00:52:11.822123965Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 28 00:52:11.822196 containerd[1465]: time="2026-04-28T00:52:11.822141188Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 28 00:52:11.822390 containerd[1465]: time="2026-04-28T00:52:11.822284390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 28 00:52:11.822390 containerd[1465]: time="2026-04-28T00:52:11.822299130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822390 containerd[1465]: time="2026-04-28T00:52:11.822347359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822390 containerd[1465]: time="2026-04-28T00:52:11.822356989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822555 containerd[1465]: time="2026-04-28T00:52:11.822521855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822555 containerd[1465]: time="2026-04-28T00:52:11.822545457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822585 containerd[1465]: time="2026-04-28T00:52:11.822556386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822585 containerd[1465]: time="2026-04-28T00:52:11.822563792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822645 containerd[1465]: time="2026-04-28T00:52:11.822619378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822851 containerd[1465]: time="2026-04-28T00:52:11.822819311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822971 containerd[1465]: time="2026-04-28T00:52:11.822939626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 28 00:52:11.822971 containerd[1465]: time="2026-04-28T00:52:11.822955462Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 28 00:52:11.823086 containerd[1465]: time="2026-04-28T00:52:11.823057342Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 28 00:52:11.823123 containerd[1465]: time="2026-04-28T00:52:11.823104970Z" level=info msg="metadata content store policy set" policy=shared
Apr 28 00:52:11.828832 containerd[1465]: time="2026-04-28T00:52:11.828528091Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 28 00:52:11.828832 containerd[1465]: time="2026-04-28T00:52:11.828595409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 28 00:52:11.828832 containerd[1465]: time="2026-04-28T00:52:11.828614470Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 28 00:52:11.828832 containerd[1465]: time="2026-04-28T00:52:11.828638839Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 28 00:52:11.828832 containerd[1465]: time="2026-04-28T00:52:11.828659696Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 28 00:52:11.828832 containerd[1465]: time="2026-04-28T00:52:11.828938936Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 28 00:52:11.829359 containerd[1465]: time="2026-04-28T00:52:11.829274408Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 28 00:52:11.829436 containerd[1465]: time="2026-04-28T00:52:11.829385315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 28 00:52:11.829436 containerd[1465]: time="2026-04-28T00:52:11.829404152Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 28 00:52:11.829436 containerd[1465]: time="2026-04-28T00:52:11.829420085Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 28 00:52:11.829516 containerd[1465]: time="2026-04-28T00:52:11.829436394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829516 containerd[1465]: time="2026-04-28T00:52:11.829454312Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829516 containerd[1465]: time="2026-04-28T00:52:11.829470986Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829516 containerd[1465]: time="2026-04-28T00:52:11.829488233Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829516 containerd[1465]: time="2026-04-28T00:52:11.829505141Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829623 containerd[1465]: time="2026-04-28T00:52:11.829521053Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829623 containerd[1465]: time="2026-04-28T00:52:11.829545462Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829623 containerd[1465]: time="2026-04-28T00:52:11.829561379Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 28 00:52:11.829623 containerd[1465]: time="2026-04-28T00:52:11.829585352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829623 containerd[1465]: time="2026-04-28T00:52:11.829601802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829623 containerd[1465]: time="2026-04-28T00:52:11.829616968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829630857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829644436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829659850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829682627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829700596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829768411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829797 containerd[1465]: time="2026-04-28T00:52:11.829790396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829805872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829824264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829840020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829859268Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829883579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829899417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.829952 containerd[1465]: time="2026-04-28T00:52:11.829913738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.829968581Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830349839Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830489599Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830545955Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830558118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830577112Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830590013Z" level=info msg="NRI interface is disabled by configuration."
Apr 28 00:52:11.831267 containerd[1465]: time="2026-04-28T00:52:11.830603960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 28 00:52:11.831461 containerd[1465]: time="2026-04-28T00:52:11.830974980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 28 00:52:11.831461 containerd[1465]: time="2026-04-28T00:52:11.831343018Z" level=info msg="Connect containerd service" Apr 28 00:52:11.831461 containerd[1465]: time="2026-04-28T00:52:11.831385593Z" level=info msg="using legacy CRI server" Apr 28 00:52:11.831461 containerd[1465]: time="2026-04-28T00:52:11.831395676Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 28 00:52:11.831762 containerd[1465]: time="2026-04-28T00:52:11.831545084Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 28 00:52:11.834101 containerd[1465]: time="2026-04-28T00:52:11.833692865Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 28 00:52:11.834101 containerd[1465]: time="2026-04-28T00:52:11.834066374Z" level=info msg="Start subscribing containerd event" Apr 28 00:52:11.834186 containerd[1465]: time="2026-04-28T00:52:11.834127810Z" level=info msg="Start recovering state" Apr 28 00:52:11.834229 containerd[1465]: time="2026-04-28T00:52:11.834191901Z" level=info msg="Start event monitor" Apr 28 00:52:11.834229 containerd[1465]: time="2026-04-28T00:52:11.834213179Z" level=info msg="Start 
snapshots syncer" Apr 28 00:52:11.834229 containerd[1465]: time="2026-04-28T00:52:11.834223100Z" level=info msg="Start cni network conf syncer for default" Apr 28 00:52:11.834289 containerd[1465]: time="2026-04-28T00:52:11.834231637Z" level=info msg="Start streaming server" Apr 28 00:52:11.834760 containerd[1465]: time="2026-04-28T00:52:11.834728496Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 28 00:52:11.834811 containerd[1465]: time="2026-04-28T00:52:11.834796561Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 28 00:52:11.837271 containerd[1465]: time="2026-04-28T00:52:11.837065488Z" level=info msg="containerd successfully booted in 0.055245s" Apr 28 00:52:11.837446 systemd[1]: Started containerd.service - containerd container runtime. Apr 28 00:52:12.214556 tar[1462]: linux-amd64/README.md Apr 28 00:52:12.240244 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 28 00:52:12.930181 systemd-networkd[1392]: eth0: Gained IPv6LL Apr 28 00:52:12.935065 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 28 00:52:12.938506 systemd[1]: Reached target network-online.target - Network is Online. Apr 28 00:52:12.957983 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 28 00:52:12.960981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:12.963515 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 28 00:52:12.985674 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 28 00:52:12.985921 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 28 00:52:12.987933 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 28 00:52:12.992598 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 28 00:52:14.933379 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 28 00:52:14.959094 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:51178.service - OpenSSH per-connection server daemon (10.0.0.1:51178). Apr 28 00:52:15.067537 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 51178 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:15.071147 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:15.123747 systemd-logind[1445]: New session 1 of user core. Apr 28 00:52:15.124788 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 28 00:52:15.132764 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 28 00:52:15.144431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:15.149966 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 28 00:52:15.150146 (kubelet)[1551]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:15.157812 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 28 00:52:15.171314 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 28 00:52:15.196618 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 28 00:52:15.453785 systemd[1554]: Queued start job for default target default.target. Apr 28 00:52:15.462844 systemd[1554]: Created slice app.slice - User Application Slice. Apr 28 00:52:15.462874 systemd[1554]: Reached target paths.target - Paths. Apr 28 00:52:15.462885 systemd[1554]: Reached target timers.target - Timers. Apr 28 00:52:15.466404 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 28 00:52:15.489297 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Apr 28 00:52:15.489426 systemd[1554]: Reached target sockets.target - Sockets. Apr 28 00:52:15.489442 systemd[1554]: Reached target basic.target - Basic System. Apr 28 00:52:15.489495 systemd[1554]: Reached target default.target - Main User Target. Apr 28 00:52:15.489527 systemd[1554]: Startup finished in 285ms. Apr 28 00:52:15.489685 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 28 00:52:15.501799 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 28 00:52:15.504714 systemd[1]: Startup finished in 2.759s (kernel) + 8.340s (initrd) + 6.553s (userspace) = 17.653s. Apr 28 00:52:15.629769 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:44260.service - OpenSSH per-connection server daemon (10.0.0.1:44260). Apr 28 00:52:15.686637 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 44260 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:15.688825 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:15.693662 systemd-logind[1445]: New session 2 of user core. Apr 28 00:52:15.706580 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 28 00:52:15.771252 sshd[1574]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:15.784472 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:44260.service: Deactivated successfully. Apr 28 00:52:15.812484 systemd[1]: session-2.scope: Deactivated successfully. Apr 28 00:52:15.814593 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Apr 28 00:52:15.838878 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:44274.service - OpenSSH per-connection server daemon (10.0.0.1:44274). Apr 28 00:52:15.840189 systemd-logind[1445]: Removed session 2. 
Apr 28 00:52:15.871088 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 44274 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:15.873414 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:15.878324 systemd-logind[1445]: New session 3 of user core. Apr 28 00:52:15.892771 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 28 00:52:15.898597 kubelet[1551]: E0428 00:52:15.898522 1551 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:15.906403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:15.906549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:15.906885 systemd[1]: kubelet.service: Consumed 1.596s CPU time. Apr 28 00:52:15.954693 sshd[1583]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:15.974460 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:44274.service: Deactivated successfully. Apr 28 00:52:15.976085 systemd[1]: session-3.scope: Deactivated successfully. Apr 28 00:52:15.979678 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Apr 28 00:52:15.998115 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:44286.service - OpenSSH per-connection server daemon (10.0.0.1:44286). Apr 28 00:52:15.999079 systemd-logind[1445]: Removed session 3. Apr 28 00:52:16.074044 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 44286 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:16.075558 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:16.088081 systemd-logind[1445]: New session 4 of user core. 
Apr 28 00:52:16.112082 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 28 00:52:16.191676 sshd[1591]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:16.206155 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:44286.service: Deactivated successfully. Apr 28 00:52:16.207407 systemd[1]: session-4.scope: Deactivated successfully. Apr 28 00:52:16.208473 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Apr 28 00:52:16.209583 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:44302.service - OpenSSH per-connection server daemon (10.0.0.1:44302). Apr 28 00:52:16.210243 systemd-logind[1445]: Removed session 4. Apr 28 00:52:16.244101 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 44302 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:16.245296 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:16.251625 systemd-logind[1445]: New session 5 of user core. Apr 28 00:52:16.268366 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 28 00:52:16.327255 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 28 00:52:16.327485 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:52:16.344107 sudo[1601]: pam_unix(sudo:session): session closed for user root Apr 28 00:52:16.347907 sshd[1598]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:16.365194 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:44302.service: Deactivated successfully. Apr 28 00:52:16.366579 systemd[1]: session-5.scope: Deactivated successfully. Apr 28 00:52:16.367753 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Apr 28 00:52:16.368897 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:44312.service - OpenSSH per-connection server daemon (10.0.0.1:44312). Apr 28 00:52:16.369462 systemd-logind[1445]: Removed session 5. 
Apr 28 00:52:16.423917 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 44312 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:16.425550 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:16.435821 systemd-logind[1445]: New session 6 of user core. Apr 28 00:52:16.453707 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 28 00:52:16.510295 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 28 00:52:16.510568 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:52:16.514173 sudo[1610]: pam_unix(sudo:session): session closed for user root Apr 28 00:52:16.520567 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 28 00:52:16.520935 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:52:16.553192 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 28 00:52:16.555837 auditctl[1613]: No rules Apr 28 00:52:16.556204 systemd[1]: audit-rules.service: Deactivated successfully. Apr 28 00:52:16.556401 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 28 00:52:16.560443 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 28 00:52:16.595586 augenrules[1631]: No rules Apr 28 00:52:16.596372 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 28 00:52:16.597345 sudo[1609]: pam_unix(sudo:session): session closed for user root Apr 28 00:52:16.598881 sshd[1606]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:16.615577 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:44312.service: Deactivated successfully. Apr 28 00:52:16.617327 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 28 00:52:16.618892 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Apr 28 00:52:16.637359 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:44328.service - OpenSSH per-connection server daemon (10.0.0.1:44328). Apr 28 00:52:16.638228 systemd-logind[1445]: Removed session 6. Apr 28 00:52:16.667275 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 44328 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:52:16.670188 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:52:16.673971 systemd-logind[1445]: New session 7 of user core. Apr 28 00:52:16.680172 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 28 00:52:16.744129 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 28 00:52:16.744402 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 28 00:52:17.188567 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 28 00:52:17.189493 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 28 00:52:19.198970 dockerd[1660]: time="2026-04-28T00:52:19.198588203Z" level=info msg="Starting up" Apr 28 00:52:19.883591 dockerd[1660]: time="2026-04-28T00:52:19.883218797Z" level=info msg="Loading containers: start." Apr 28 00:52:20.752071 kernel: Initializing XFRM netlink socket Apr 28 00:52:20.978210 systemd-networkd[1392]: docker0: Link UP Apr 28 00:52:21.045691 dockerd[1660]: time="2026-04-28T00:52:21.043873109Z" level=info msg="Loading containers: done." 
Apr 28 00:52:21.457075 dockerd[1660]: time="2026-04-28T00:52:21.456912566Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 28 00:52:21.457439 dockerd[1660]: time="2026-04-28T00:52:21.457413233Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 28 00:52:21.457592 dockerd[1660]: time="2026-04-28T00:52:21.457555983Z" level=info msg="Daemon has completed initialization" Apr 28 00:52:21.457681 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck643374825-merged.mount: Deactivated successfully. Apr 28 00:52:21.770818 dockerd[1660]: time="2026-04-28T00:52:21.769242003Z" level=info msg="API listen on /run/docker.sock" Apr 28 00:52:21.770937 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 28 00:52:24.282658 containerd[1465]: time="2026-04-28T00:52:24.282364768Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 28 00:52:25.447841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783497898.mount: Deactivated successfully. Apr 28 00:52:25.924423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 28 00:52:25.936806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:26.374288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:52:26.375167 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:26.745521 kubelet[1874]: E0428 00:52:26.745248 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:26.751175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:26.751319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:27.470842 containerd[1465]: time="2026-04-28T00:52:27.470466807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:27.470842 containerd[1465]: time="2026-04-28T00:52:27.470790082Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 28 00:52:27.472554 containerd[1465]: time="2026-04-28T00:52:27.472500896Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:27.477637 containerd[1465]: time="2026-04-28T00:52:27.477578403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:27.479132 containerd[1465]: time="2026-04-28T00:52:27.479076593Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 3.196636835s" Apr 28 00:52:27.479185 containerd[1465]: time="2026-04-28T00:52:27.479129638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 28 00:52:27.481032 containerd[1465]: time="2026-04-28T00:52:27.481011038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 28 00:52:29.076490 containerd[1465]: time="2026-04-28T00:52:29.076157727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:29.077381 containerd[1465]: time="2026-04-28T00:52:29.076792459Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 28 00:52:29.079714 containerd[1465]: time="2026-04-28T00:52:29.079386069Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:29.084675 containerd[1465]: time="2026-04-28T00:52:29.084337030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:29.086880 containerd[1465]: time="2026-04-28T00:52:29.086691967Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" 
in 1.605596841s" Apr 28 00:52:29.086880 containerd[1465]: time="2026-04-28T00:52:29.086742370Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 28 00:52:29.089033 containerd[1465]: time="2026-04-28T00:52:29.088578066Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 28 00:52:30.426145 containerd[1465]: time="2026-04-28T00:52:30.425705795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:30.427673 containerd[1465]: time="2026-04-28T00:52:30.427135442Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 28 00:52:30.428473 containerd[1465]: time="2026-04-28T00:52:30.428415274Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:30.433663 containerd[1465]: time="2026-04-28T00:52:30.433442737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:30.435056 containerd[1465]: time="2026-04-28T00:52:30.434968481Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.346352228s" Apr 28 00:52:30.435056 containerd[1465]: time="2026-04-28T00:52:30.435036318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference 
\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 28 00:52:30.437726 containerd[1465]: time="2026-04-28T00:52:30.437492495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 28 00:52:32.166695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277456954.mount: Deactivated successfully. Apr 28 00:52:32.616122 containerd[1465]: time="2026-04-28T00:52:32.615722798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:32.616981 containerd[1465]: time="2026-04-28T00:52:32.616388343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 28 00:52:32.617482 containerd[1465]: time="2026-04-28T00:52:32.617437012Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:32.619360 containerd[1465]: time="2026-04-28T00:52:32.619317881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:32.619926 containerd[1465]: time="2026-04-28T00:52:32.619889390Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 2.182279948s" Apr 28 00:52:32.619952 containerd[1465]: time="2026-04-28T00:52:32.619922594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 28 00:52:32.620941 
containerd[1465]: time="2026-04-28T00:52:32.620913860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 28 00:52:33.140842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838422391.mount: Deactivated successfully. Apr 28 00:52:34.367271 containerd[1465]: time="2026-04-28T00:52:34.367044088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:34.367957 containerd[1465]: time="2026-04-28T00:52:34.367500359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 28 00:52:34.368531 containerd[1465]: time="2026-04-28T00:52:34.368491047Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:34.373172 containerd[1465]: time="2026-04-28T00:52:34.372875138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:34.374077 containerd[1465]: time="2026-04-28T00:52:34.374038029Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.753096024s" Apr 28 00:52:34.374077 containerd[1465]: time="2026-04-28T00:52:34.374075712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 28 00:52:34.375144 containerd[1465]: time="2026-04-28T00:52:34.375101782Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 28 00:52:34.765622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3239316067.mount: Deactivated successfully. Apr 28 00:52:34.772825 containerd[1465]: time="2026-04-28T00:52:34.772565116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:34.773351 containerd[1465]: time="2026-04-28T00:52:34.773246179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 28 00:52:34.774358 containerd[1465]: time="2026-04-28T00:52:34.774312369Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:34.776738 containerd[1465]: time="2026-04-28T00:52:34.776704694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:34.777566 containerd[1465]: time="2026-04-28T00:52:34.777531237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 402.386414ms" Apr 28 00:52:34.777566 containerd[1465]: time="2026-04-28T00:52:34.777563205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 28 00:52:34.778724 containerd[1465]: time="2026-04-28T00:52:34.778540102Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 28 00:52:35.305162 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2834307432.mount: Deactivated successfully. Apr 28 00:52:36.546963 containerd[1465]: time="2026-04-28T00:52:36.546618113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:36.547756 containerd[1465]: time="2026-04-28T00:52:36.547067456Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 28 00:52:36.548161 containerd[1465]: time="2026-04-28T00:52:36.548121049Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:36.552308 containerd[1465]: time="2026-04-28T00:52:36.552057384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:52:36.553107 containerd[1465]: time="2026-04-28T00:52:36.553070379Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.774503449s" Apr 28 00:52:36.553107 containerd[1465]: time="2026-04-28T00:52:36.553105393Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 28 00:52:36.753785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 28 00:52:36.768396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:37.381293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:52:37.387541 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 28 00:52:37.462665 kubelet[2048]: E0428 00:52:37.462183 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 28 00:52:37.468504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 28 00:52:37.468632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 28 00:52:38.694695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:38.713923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:38.771899 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-7.scope)... Apr 28 00:52:38.771949 systemd[1]: Reloading... Apr 28 00:52:38.994664 zram_generator::config[2109]: No configuration found. Apr 28 00:52:39.489446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:52:39.742899 systemd[1]: Reloading finished in 967 ms. Apr 28 00:52:39.849786 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:39.862873 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:52:39.863179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:39.882857 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:40.092637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 28 00:52:40.113254 (kubelet)[2156]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:52:40.164394 kubelet[2156]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 28 00:52:40.698294 kubelet[2156]: I0428 00:52:40.696297 2156 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 28 00:52:40.698294 kubelet[2156]: I0428 00:52:40.698153 2156 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:52:40.699129 kubelet[2156]: I0428 00:52:40.698482 2156 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:52:40.699129 kubelet[2156]: I0428 00:52:40.698497 2156 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 00:52:40.699129 kubelet[2156]: I0428 00:52:40.698965 2156 server.go:951] "Client rotation is on, will bootstrap in background" Apr 28 00:52:40.743317 kubelet[2156]: E0428 00:52:40.743228 2156 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 28 00:52:40.744226 kubelet[2156]: I0428 00:52:40.744187 2156 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:52:40.751438 kubelet[2156]: E0428 00:52:40.751289 2156 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 00:52:40.751438 kubelet[2156]: I0428 
00:52:40.751402 2156 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 00:52:40.757033 kubelet[2156]: I0428 00:52:40.756895 2156 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 28 00:52:40.757625 kubelet[2156]: I0428 00:52:40.757543 2156 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:52:40.757735 kubelet[2156]: I0428 00:52:40.757572 2156 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":
-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:52:40.757735 kubelet[2156]: I0428 00:52:40.757705 2156 topology_manager.go:143] "Creating topology manager with none policy" Apr 28 00:52:40.757735 kubelet[2156]: I0428 00:52:40.757711 2156 container_manager_linux.go:308] "Creating device plugin manager" Apr 28 00:52:40.758105 kubelet[2156]: I0428 00:52:40.757816 2156 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:52:40.760669 kubelet[2156]: I0428 00:52:40.760636 2156 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 28 00:52:40.760894 kubelet[2156]: I0428 00:52:40.760857 2156 kubelet.go:482] "Attempting to sync node with API server" Apr 28 00:52:40.760894 kubelet[2156]: I0428 00:52:40.760885 2156 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:52:40.760976 kubelet[2156]: I0428 00:52:40.760904 2156 kubelet.go:394] "Adding apiserver pod source" Apr 28 00:52:40.760976 kubelet[2156]: I0428 00:52:40.760912 2156 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:52:40.766145 kubelet[2156]: I0428 00:52:40.765521 2156 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 00:52:40.767950 kubelet[2156]: I0428 00:52:40.767902 2156 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:52:40.768042 kubelet[2156]: I0428 00:52:40.767967 2156 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:52:40.768154 kubelet[2156]: W0428 00:52:40.768113 2156 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does 
not exist. Recreating. Apr 28 00:52:40.776513 kubelet[2156]: I0428 00:52:40.776466 2156 server.go:1257] "Started kubelet" Apr 28 00:52:40.777572 kubelet[2156]: I0428 00:52:40.776665 2156 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:52:40.777572 kubelet[2156]: I0428 00:52:40.776711 2156 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:52:40.777572 kubelet[2156]: I0428 00:52:40.776798 2156 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:52:40.777572 kubelet[2156]: I0428 00:52:40.777199 2156 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:52:40.781099 kubelet[2156]: I0428 00:52:40.780926 2156 server.go:317] "Adding debug handlers to kubelet server" Apr 28 00:52:40.782853 kubelet[2156]: I0428 00:52:40.782613 2156 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 28 00:52:40.782907 kubelet[2156]: I0428 00:52:40.782828 2156 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:52:40.784228 kubelet[2156]: I0428 00:52:40.783520 2156 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 28 00:52:40.784228 kubelet[2156]: I0428 00:52:40.783617 2156 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:52:40.784228 kubelet[2156]: I0428 00:52:40.783725 2156 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:52:41.047722 kubelet[2156]: I0428 00:52:41.047235 2156 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:52:41.048590 kubelet[2156]: I0428 00:52:41.048552 2156 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:52:41.049148 
kubelet[2156]: E0428 00:52:41.047444 2156 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 28 00:52:41.049491 kubelet[2156]: E0428 00:52:41.047763 2156 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" Apr 28 00:52:41.053797 kubelet[2156]: E0428 00:52:41.050711 2156 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18aa5f184408b1c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-28 00:52:40.776356296 +0000 UTC m=+0.654130808,LastTimestamp:2026-04-28 00:52:40.776356296 +0000 UTC m=+0.654130808,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 28 00:52:41.053797 kubelet[2156]: E0428 00:52:41.052147 2156 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:52:41.056140 kubelet[2156]: I0428 00:52:41.055915 2156 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:52:41.079364 kubelet[2156]: I0428 00:52:41.079316 2156 cpu_manager.go:225] "Starting" policy="none" Apr 28 00:52:41.079528 kubelet[2156]: I0428 00:52:41.079483 2156 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 28 00:52:41.079528 kubelet[2156]: I0428 00:52:41.079510 2156 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 28 00:52:41.082193 kubelet[2156]: I0428 00:52:41.082098 2156 policy_none.go:50] "Start" Apr 28 00:52:41.082193 kubelet[2156]: I0428 00:52:41.082113 2156 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:52:41.082193 kubelet[2156]: I0428 00:52:41.082122 2156 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:52:41.084336 kubelet[2156]: I0428 00:52:41.084260 2156 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 28 00:52:41.086563 kubelet[2156]: I0428 00:52:41.086433 2156 policy_none.go:44] "Start" Apr 28 00:52:41.087301 kubelet[2156]: I0428 00:52:41.087088 2156 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 00:52:41.087301 kubelet[2156]: I0428 00:52:41.087109 2156 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 28 00:52:41.087301 kubelet[2156]: I0428 00:52:41.087153 2156 kubelet.go:2501] "Starting kubelet main sync loop" Apr 28 00:52:41.087301 kubelet[2156]: E0428 00:52:41.087203 2156 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:52:41.093474 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 28 00:52:41.109701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 28 00:52:41.112371 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 28 00:52:41.126493 kubelet[2156]: E0428 00:52:41.126193 2156 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:52:41.126803 kubelet[2156]: I0428 00:52:41.126779 2156 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 28 00:52:41.126884 kubelet[2156]: I0428 00:52:41.126790 2156 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:52:41.127249 kubelet[2156]: I0428 00:52:41.127143 2156 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 28 00:52:41.128694 kubelet[2156]: E0428 00:52:41.128634 2156 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 28 00:52:41.128694 kubelet[2156]: E0428 00:52:41.128682 2156 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 28 00:52:41.219941 systemd[1]: Created slice kubepods-burstable-pod62e263c1ab98067f4f5eb1873a7bfa89.slice - libcontainer container kubepods-burstable-pod62e263c1ab98067f4f5eb1873a7bfa89.slice. 
Apr 28 00:52:41.231341 kubelet[2156]: I0428 00:52:41.231215 2156 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 28 00:52:41.231936 kubelet[2156]: E0428 00:52:41.231794 2156 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Apr 28 00:52:41.234203 kubelet[2156]: E0428 00:52:41.234139 2156 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:41.237828 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 28 00:52:41.240925 kubelet[2156]: E0428 00:52:41.240820 2156 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:41.243072 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 28 00:52:41.247216 kubelet[2156]: E0428 00:52:41.247096 2156 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:41.251970 kubelet[2156]: E0428 00:52:41.251790 2156 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" Apr 28 00:52:41.348764 kubelet[2156]: I0428 00:52:41.347877 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62e263c1ab98067f4f5eb1873a7bfa89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62e263c1ab98067f4f5eb1873a7bfa89\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:41.348764 kubelet[2156]: I0428 00:52:41.348149 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62e263c1ab98067f4f5eb1873a7bfa89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62e263c1ab98067f4f5eb1873a7bfa89\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:41.348764 kubelet[2156]: I0428 00:52:41.348174 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62e263c1ab98067f4f5eb1873a7bfa89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62e263c1ab98067f4f5eb1873a7bfa89\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:41.348764 kubelet[2156]: I0428 00:52:41.348206 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:41.348764 kubelet[2156]: I0428 00:52:41.348310 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:41.349310 kubelet[2156]: I0428 00:52:41.348397 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:41.349310 kubelet[2156]: I0428 00:52:41.348456 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:41.349310 kubelet[2156]: I0428 00:52:41.348489 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:41.349310 kubelet[2156]: I0428 00:52:41.348503 2156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:41.437624 kubelet[2156]: I0428 00:52:41.437521 2156 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 28 00:52:41.437943 kubelet[2156]: E0428 00:52:41.437901 2156 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Apr 28 00:52:41.542093 kubelet[2156]: E0428 00:52:41.541873 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:41.546455 containerd[1465]: time="2026-04-28T00:52:41.546182066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62e263c1ab98067f4f5eb1873a7bfa89,Namespace:kube-system,Attempt:0,}" Apr 28 00:52:41.550099 kubelet[2156]: E0428 00:52:41.549953 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:41.550686 containerd[1465]: time="2026-04-28T00:52:41.550619342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 28 00:52:41.555954 kubelet[2156]: E0428 00:52:41.555753 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:41.556330 containerd[1465]: time="2026-04-28T00:52:41.556295974Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 28 00:52:41.654430 kubelet[2156]: E0428 00:52:41.653187 2156 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" Apr 28 00:52:41.842647 kubelet[2156]: I0428 00:52:41.842539 2156 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 28 00:52:41.843109 kubelet[2156]: E0428 00:52:41.843070 2156 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Apr 28 00:52:42.084952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985387909.mount: Deactivated successfully. Apr 28 00:52:42.094510 containerd[1465]: time="2026-04-28T00:52:42.094318530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:52:42.095946 containerd[1465]: time="2026-04-28T00:52:42.095884528Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:52:42.097111 containerd[1465]: time="2026-04-28T00:52:42.097063654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 28 00:52:42.097794 containerd[1465]: time="2026-04-28T00:52:42.097741173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:52:42.099115 containerd[1465]: time="2026-04-28T00:52:42.098512032Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:52:42.100547 containerd[1465]: time="2026-04-28T00:52:42.100402350Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:52:42.100547 containerd[1465]: time="2026-04-28T00:52:42.100519445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 28 00:52:42.102272 containerd[1465]: time="2026-04-28T00:52:42.102240406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 28 00:52:42.103392 containerd[1465]: time="2026-04-28T00:52:42.103343733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.996964ms" Apr 28 00:52:42.104469 containerd[1465]: time="2026-04-28T00:52:42.104429397Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 553.740733ms" Apr 28 00:52:42.107208 containerd[1465]: time="2026-04-28T00:52:42.107185223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.866182ms" Apr 28 00:52:42.214563 containerd[1465]: time="2026-04-28T00:52:42.212734856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:52:42.214563 containerd[1465]: time="2026-04-28T00:52:42.212901831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:52:42.214563 containerd[1465]: time="2026-04-28T00:52:42.212922957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:42.214563 containerd[1465]: time="2026-04-28T00:52:42.213112006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:42.218449 containerd[1465]: time="2026-04-28T00:52:42.214330974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:52:42.218449 containerd[1465]: time="2026-04-28T00:52:42.214366724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:52:42.218449 containerd[1465]: time="2026-04-28T00:52:42.214412838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:42.218449 containerd[1465]: time="2026-04-28T00:52:42.214541489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:42.219472 containerd[1465]: time="2026-04-28T00:52:42.219398691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:52:42.219554 containerd[1465]: time="2026-04-28T00:52:42.219482763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:52:42.219554 containerd[1465]: time="2026-04-28T00:52:42.219501075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:42.219598 containerd[1465]: time="2026-04-28T00:52:42.219573968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:42.240175 systemd[1]: Started cri-containerd-89f49214c9dec349487316a2253568e02fb3209ace4533c9b7b78cb477a060f4.scope - libcontainer container 89f49214c9dec349487316a2253568e02fb3209ace4533c9b7b78cb477a060f4. Apr 28 00:52:42.241308 systemd[1]: Started cri-containerd-ce7d354dce436b33bd7f5eec9d46da44120c960c6a70d7822648b75599f11fb8.scope - libcontainer container ce7d354dce436b33bd7f5eec9d46da44120c960c6a70d7822648b75599f11fb8. Apr 28 00:52:42.242729 systemd[1]: Started cri-containerd-e548deca71106e600a798baf2d28a325abba6bd3b59cde4ed36d5cabd012c310.scope - libcontainer container e548deca71106e600a798baf2d28a325abba6bd3b59cde4ed36d5cabd012c310. 
Apr 28 00:52:42.291553 containerd[1465]: time="2026-04-28T00:52:42.291343960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"89f49214c9dec349487316a2253568e02fb3209ace4533c9b7b78cb477a060f4\"" Apr 28 00:52:42.292609 containerd[1465]: time="2026-04-28T00:52:42.292555758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62e263c1ab98067f4f5eb1873a7bfa89,Namespace:kube-system,Attempt:0,} returns sandbox id \"e548deca71106e600a798baf2d28a325abba6bd3b59cde4ed36d5cabd012c310\"" Apr 28 00:52:42.293008 kubelet[2156]: E0428 00:52:42.292950 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:42.293008 kubelet[2156]: E0428 00:52:42.292976 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:42.297805 containerd[1465]: time="2026-04-28T00:52:42.297779650Z" level=info msg="CreateContainer within sandbox \"89f49214c9dec349487316a2253568e02fb3209ace4533c9b7b78cb477a060f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 28 00:52:42.300803 containerd[1465]: time="2026-04-28T00:52:42.300433769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce7d354dce436b33bd7f5eec9d46da44120c960c6a70d7822648b75599f11fb8\"" Apr 28 00:52:42.301571 containerd[1465]: time="2026-04-28T00:52:42.301546439Z" level=info msg="CreateContainer within sandbox \"e548deca71106e600a798baf2d28a325abba6bd3b59cde4ed36d5cabd012c310\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 28 00:52:42.301975 
kubelet[2156]: E0428 00:52:42.301936 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:42.306279 containerd[1465]: time="2026-04-28T00:52:42.306252755Z" level=info msg="CreateContainer within sandbox \"ce7d354dce436b33bd7f5eec9d46da44120c960c6a70d7822648b75599f11fb8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 28 00:52:42.322201 containerd[1465]: time="2026-04-28T00:52:42.321875445Z" level=info msg="CreateContainer within sandbox \"e548deca71106e600a798baf2d28a325abba6bd3b59cde4ed36d5cabd012c310\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24e00df59a7929760ebae38c4be3749c2994e6518d0b91df03a2787e35bf101d\"" Apr 28 00:52:42.323542 containerd[1465]: time="2026-04-28T00:52:42.323506868Z" level=info msg="StartContainer for \"24e00df59a7929760ebae38c4be3749c2994e6518d0b91df03a2787e35bf101d\"" Apr 28 00:52:42.328033 containerd[1465]: time="2026-04-28T00:52:42.327762953Z" level=info msg="CreateContainer within sandbox \"89f49214c9dec349487316a2253568e02fb3209ace4533c9b7b78cb477a060f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cfc0cafe196fd3c016238a3cd7d061a76e204d8c73b2567fd142ffd6d43d4348\"" Apr 28 00:52:42.328713 containerd[1465]: time="2026-04-28T00:52:42.328667119Z" level=info msg="StartContainer for \"cfc0cafe196fd3c016238a3cd7d061a76e204d8c73b2567fd142ffd6d43d4348\"" Apr 28 00:52:42.334582 containerd[1465]: time="2026-04-28T00:52:42.334497836Z" level=info msg="CreateContainer within sandbox \"ce7d354dce436b33bd7f5eec9d46da44120c960c6a70d7822648b75599f11fb8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bdef606b93585c53c40cde28f7154eb5876097b924a90db2e02da2a4a1a8af5\"" Apr 28 00:52:42.338891 containerd[1465]: time="2026-04-28T00:52:42.336525687Z" level=info msg="StartContainer for 
\"0bdef606b93585c53c40cde28f7154eb5876097b924a90db2e02da2a4a1a8af5\"" Apr 28 00:52:42.355154 systemd[1]: Started cri-containerd-24e00df59a7929760ebae38c4be3749c2994e6518d0b91df03a2787e35bf101d.scope - libcontainer container 24e00df59a7929760ebae38c4be3749c2994e6518d0b91df03a2787e35bf101d. Apr 28 00:52:42.356026 systemd[1]: Started cri-containerd-cfc0cafe196fd3c016238a3cd7d061a76e204d8c73b2567fd142ffd6d43d4348.scope - libcontainer container cfc0cafe196fd3c016238a3cd7d061a76e204d8c73b2567fd142ffd6d43d4348. Apr 28 00:52:42.362657 systemd[1]: Started cri-containerd-0bdef606b93585c53c40cde28f7154eb5876097b924a90db2e02da2a4a1a8af5.scope - libcontainer container 0bdef606b93585c53c40cde28f7154eb5876097b924a90db2e02da2a4a1a8af5. Apr 28 00:52:42.405837 containerd[1465]: time="2026-04-28T00:52:42.405754559Z" level=info msg="StartContainer for \"24e00df59a7929760ebae38c4be3749c2994e6518d0b91df03a2787e35bf101d\" returns successfully" Apr 28 00:52:42.412251 containerd[1465]: time="2026-04-28T00:52:42.411906951Z" level=info msg="StartContainer for \"cfc0cafe196fd3c016238a3cd7d061a76e204d8c73b2567fd142ffd6d43d4348\" returns successfully" Apr 28 00:52:42.423420 containerd[1465]: time="2026-04-28T00:52:42.423364821Z" level=info msg="StartContainer for \"0bdef606b93585c53c40cde28f7154eb5876097b924a90db2e02da2a4a1a8af5\" returns successfully" Apr 28 00:52:42.455538 kubelet[2156]: E0428 00:52:42.455384 2156 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s" Apr 28 00:52:42.650420 kubelet[2156]: I0428 00:52:42.649234 2156 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 28 00:52:43.156064 kubelet[2156]: E0428 00:52:43.155837 2156 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Apr 28 00:52:43.156064 kubelet[2156]: E0428 00:52:43.156084 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:43.156774 kubelet[2156]: E0428 00:52:43.156706 2156 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:43.156865 kubelet[2156]: E0428 00:52:43.156834 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:43.158383 kubelet[2156]: E0428 00:52:43.158356 2156 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 28 00:52:43.158564 kubelet[2156]: E0428 00:52:43.158504 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:43.735879 kubelet[2156]: I0428 00:52:43.735613 2156 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 28 00:52:43.747941 kubelet[2156]: I0428 00:52:43.747834 2156 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:43.757341 kubelet[2156]: E0428 00:52:43.755664 2156 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:43.757341 kubelet[2156]: I0428 00:52:43.755697 2156 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:43.760199 kubelet[2156]: E0428 00:52:43.760156 2156 kubelet.go:3342] "Failed 
creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:43.760199 kubelet[2156]: I0428 00:52:43.760184 2156 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:43.761380 kubelet[2156]: E0428 00:52:43.761345 2156 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:43.765173 kubelet[2156]: I0428 00:52:43.764628 2156 apiserver.go:52] "Watching apiserver" Apr 28 00:52:43.789067 kubelet[2156]: I0428 00:52:43.788408 2156 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:52:44.160908 kubelet[2156]: I0428 00:52:44.160408 2156 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:44.160908 kubelet[2156]: I0428 00:52:44.160431 2156 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:44.165112 kubelet[2156]: E0428 00:52:44.164763 2156 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:44.165112 kubelet[2156]: E0428 00:52:44.164908 2156 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:44.165112 kubelet[2156]: E0428 00:52:44.165050 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 
00:52:44.165112 kubelet[2156]: E0428 00:52:44.165062 2156 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:46.107353 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Apr 28 00:52:46.107392 systemd[1]: Reloading... Apr 28 00:52:46.189205 zram_generator::config[2481]: No configuration found. Apr 28 00:52:46.462393 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 28 00:52:46.567512 systemd[1]: Reloading finished in 459 ms. Apr 28 00:52:46.614756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:46.633785 systemd[1]: kubelet.service: Deactivated successfully. Apr 28 00:52:46.634107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:46.634170 systemd[1]: kubelet.service: Consumed 2.120s CPU time, 127.2M memory peak, 0B memory swap peak. Apr 28 00:52:46.648682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 28 00:52:46.845368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 28 00:52:46.850858 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 28 00:52:46.960907 kubelet[2526]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 28 00:52:46.989573 kubelet[2526]: I0428 00:52:46.988262 2526 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 28 00:52:46.989573 kubelet[2526]: I0428 00:52:46.988969 2526 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 28 00:52:46.989573 kubelet[2526]: I0428 00:52:46.989162 2526 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 28 00:52:46.989573 kubelet[2526]: I0428 00:52:46.989168 2526 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 28 00:52:46.989573 kubelet[2526]: I0428 00:52:46.989458 2526 server.go:951] "Client rotation is on, will bootstrap in background" Apr 28 00:52:46.990673 kubelet[2526]: I0428 00:52:46.990616 2526 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 28 00:52:46.993626 kubelet[2526]: I0428 00:52:46.992416 2526 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 28 00:52:46.999667 kubelet[2526]: E0428 00:52:46.999537 2526 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 28 00:52:46.999667 kubelet[2526]: I0428 00:52:46.999629 2526 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 28 00:52:47.005564 kubelet[2526]: I0428 00:52:47.005355 2526 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 28 00:52:47.005564 kubelet[2526]: I0428 00:52:47.005628 2526 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 28 00:52:47.006042 kubelet[2526]: I0428 00:52:47.005650 2526 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 28 00:52:47.006042 kubelet[2526]: I0428 00:52:47.005835 2526 topology_manager.go:143] "Creating topology manager with none policy" Apr 28 00:52:47.006042 
kubelet[2526]: I0428 00:52:47.005842 2526 container_manager_linux.go:308] "Creating device plugin manager" Apr 28 00:52:47.006042 kubelet[2526]: I0428 00:52:47.005860 2526 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 28 00:52:47.006290 kubelet[2526]: I0428 00:52:47.006225 2526 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 28 00:52:47.006460 kubelet[2526]: I0428 00:52:47.006422 2526 kubelet.go:482] "Attempting to sync node with API server" Apr 28 00:52:47.006460 kubelet[2526]: I0428 00:52:47.006446 2526 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 28 00:52:47.006460 kubelet[2526]: I0428 00:52:47.006458 2526 kubelet.go:394] "Adding apiserver pod source" Apr 28 00:52:47.006533 kubelet[2526]: I0428 00:52:47.006467 2526 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 28 00:52:47.009461 kubelet[2526]: I0428 00:52:47.008041 2526 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 28 00:52:47.009461 kubelet[2526]: I0428 00:52:47.008632 2526 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 28 00:52:47.009461 kubelet[2526]: I0428 00:52:47.008653 2526 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 28 00:52:47.013677 kubelet[2526]: I0428 00:52:47.013121 2526 server.go:1257] "Started kubelet" Apr 28 00:52:47.018496 kubelet[2526]: I0428 00:52:47.014170 2526 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 28 00:52:47.019072 kubelet[2526]: I0428 00:52:47.019043 2526 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 28 00:52:47.019238 kubelet[2526]: I0428 00:52:47.014280 2526 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 28 00:52:47.019921 kubelet[2526]: I0428 00:52:47.014470 2526 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 28 00:52:47.019985 kubelet[2526]: I0428 00:52:47.019930 2526 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 28 00:52:47.022022 kubelet[2526]: I0428 00:52:47.021125 2526 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 28 00:52:47.022293 kubelet[2526]: I0428 00:52:47.022271 2526 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 28 00:52:47.022600 kubelet[2526]: I0428 00:52:47.014443 2526 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 28 00:52:47.024206 kubelet[2526]: I0428 00:52:47.024184 2526 reconciler.go:29] "Reconciler: start to sync state" Apr 28 00:52:47.026650 kubelet[2526]: I0428 00:52:47.025427 2526 server.go:317] "Adding debug handlers to kubelet server" Apr 28 00:52:47.028801 kubelet[2526]: I0428 00:52:47.028748 2526 factory.go:223] Registration of the systemd container factory successfully Apr 28 00:52:47.028960 kubelet[2526]: I0428 00:52:47.028852 2526 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 28 00:52:47.030959 kubelet[2526]: I0428 00:52:47.030811 2526 factory.go:223] Registration of the containerd container factory successfully Apr 28 00:52:47.031487 kubelet[2526]: E0428 00:52:47.031463 2526 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 28 00:52:47.047293 kubelet[2526]: I0428 00:52:47.045129 2526 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 28 00:52:47.051341 kubelet[2526]: I0428 00:52:47.050260 2526 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 28 00:52:47.051341 kubelet[2526]: I0428 00:52:47.050315 2526 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 28 00:52:47.051341 kubelet[2526]: I0428 00:52:47.050342 2526 kubelet.go:2501] "Starting kubelet main sync loop" Apr 28 00:52:47.051341 kubelet[2526]: E0428 00:52:47.050623 2526 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 28 00:52:47.115822 sudo[2565]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.115914 2526 cpu_manager.go:225] "Starting" policy="none" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.115925 2526 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.115940 2526 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116154 2526 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116186 2526 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116201 2526 policy_none.go:50] "Start" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116209 2526 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116217 2526 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116302 2526 state_mem.go:77] "Updated 
machine memory state" logger="Memory Manager state checkpoint" Apr 28 00:52:47.116429 kubelet[2526]: I0428 00:52:47.116308 2526 policy_none.go:44] "Start" Apr 28 00:52:47.117110 sudo[2565]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 28 00:52:47.123415 kubelet[2526]: E0428 00:52:47.123360 2526 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 28 00:52:47.123564 kubelet[2526]: I0428 00:52:47.123545 2526 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 28 00:52:47.123614 kubelet[2526]: I0428 00:52:47.123576 2526 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 28 00:52:47.124812 kubelet[2526]: I0428 00:52:47.124771 2526 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 28 00:52:47.125513 kubelet[2526]: E0428 00:52:47.125467 2526 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 28 00:52:47.152233 kubelet[2526]: I0428 00:52:47.152143 2526 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:47.152640 kubelet[2526]: I0428 00:52:47.152587 2526 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:47.152968 kubelet[2526]: I0428 00:52:47.152903 2526 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:47.225873 kubelet[2526]: I0428 00:52:47.225653 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62e263c1ab98067f4f5eb1873a7bfa89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62e263c1ab98067f4f5eb1873a7bfa89\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:47.238512 kubelet[2526]: I0428 00:52:47.238381 2526 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 28 00:52:47.248318 kubelet[2526]: I0428 00:52:47.248181 2526 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 28 00:52:47.248881 kubelet[2526]: I0428 00:52:47.248509 2526 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 28 00:52:47.352733 kubelet[2526]: I0428 00:52:47.326894 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:47.352733 kubelet[2526]: I0428 00:52:47.327067 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:47.352733 kubelet[2526]: I0428 00:52:47.327101 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:47.352733 kubelet[2526]: I0428 00:52:47.327127 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 28 00:52:47.355179 kubelet[2526]: I0428 00:52:47.355123 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62e263c1ab98067f4f5eb1873a7bfa89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62e263c1ab98067f4f5eb1873a7bfa89\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:47.355406 kubelet[2526]: I0428 00:52:47.355394 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62e263c1ab98067f4f5eb1873a7bfa89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62e263c1ab98067f4f5eb1873a7bfa89\") " pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:47.355446 kubelet[2526]: I0428 00:52:47.355440 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:47.355640 kubelet[2526]: I0428 00:52:47.355613 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 28 00:52:47.460173 kubelet[2526]: E0428 00:52:47.459837 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:47.461501 kubelet[2526]: E0428 00:52:47.461440 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:47.462588 kubelet[2526]: E0428 00:52:47.462548 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:48.010811 kubelet[2526]: I0428 00:52:48.009379 2526 apiserver.go:52] "Watching apiserver" Apr 28 00:52:48.024586 kubelet[2526]: I0428 00:52:48.023640 2526 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 28 00:52:48.144576 kubelet[2526]: E0428 00:52:48.144494 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:48.145289 kubelet[2526]: E0428 00:52:48.145274 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:48.145677 kubelet[2526]: I0428 00:52:48.145667 2526 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:48.161133 kubelet[2526]: E0428 00:52:48.160885 2526 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 28 00:52:48.162365 kubelet[2526]: E0428 00:52:48.162238 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:48.229810 kubelet[2526]: I0428 00:52:48.229664 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.229621027 podStartE2EDuration="1.229621027s" podCreationTimestamp="2026-04-28 00:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:48.202339605 +0000 UTC m=+1.346597703" watchObservedRunningTime="2026-04-28 00:52:48.229621027 +0000 UTC m=+1.373879116" Apr 28 00:52:48.229810 kubelet[2526]: I0428 00:52:48.229849 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.229844812 podStartE2EDuration="1.229844812s" podCreationTimestamp="2026-04-28 00:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:48.229829352 +0000 UTC m=+1.374087444" watchObservedRunningTime="2026-04-28 00:52:48.229844812 +0000 UTC m=+1.374102905" Apr 28 00:52:48.243188 kubelet[2526]: I0428 00:52:48.242862 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.242824517 podStartE2EDuration="1.242824517s" podCreationTimestamp="2026-04-28 00:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:48.242605368 +0000 UTC m=+1.386863454" watchObservedRunningTime="2026-04-28 00:52:48.242824517 +0000 UTC m=+1.387082603" Apr 28 00:52:48.265345 sudo[2565]: pam_unix(sudo:session): session closed for user root Apr 28 00:52:49.151537 kubelet[2526]: E0428 00:52:49.151295 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:49.151537 kubelet[2526]: E0428 00:52:49.151371 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:49.968930 sudo[1642]: pam_unix(sudo:session): session closed for user root Apr 28 00:52:49.977849 sshd[1639]: pam_unix(sshd:session): session closed for user core Apr 28 00:52:49.984129 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:44328.service: Deactivated successfully. Apr 28 00:52:50.001576 systemd[1]: session-7.scope: Deactivated successfully. Apr 28 00:52:50.001908 systemd[1]: session-7.scope: Consumed 6.695s CPU time, 160.9M memory peak, 0B memory swap peak. Apr 28 00:52:50.004127 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Apr 28 00:52:50.028202 systemd-logind[1445]: Removed session 7. 
Apr 28 00:52:50.159899 kubelet[2526]: E0428 00:52:50.159662 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:51.096534 kubelet[2526]: I0428 00:52:51.096245 2526 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 28 00:52:51.103691 containerd[1465]: time="2026-04-28T00:52:51.102474282Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 28 00:52:51.104519 kubelet[2526]: I0428 00:52:51.103721 2526 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 28 00:52:51.772439 systemd[1]: Created slice kubepods-besteffort-pod31083037_914e_486f_bd3e_01c76d0afb16.slice - libcontainer container kubepods-besteffort-pod31083037_914e_486f_bd3e_01c76d0afb16.slice. Apr 28 00:52:51.794818 systemd[1]: Created slice kubepods-burstable-podeb85c81a_23da_4457_9bba_49e68db7ac00.slice - libcontainer container kubepods-burstable-podeb85c81a_23da_4457_9bba_49e68db7ac00.slice. 
Apr 28 00:52:51.877308 kubelet[2526]: I0428 00:52:51.872372 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31083037-914e-486f-bd3e-01c76d0afb16-kube-proxy\") pod \"kube-proxy-m7jnz\" (UID: \"31083037-914e-486f-bd3e-01c76d0afb16\") " pod="kube-system/kube-proxy-m7jnz" Apr 28 00:52:51.877308 kubelet[2526]: I0428 00:52:51.873512 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31083037-914e-486f-bd3e-01c76d0afb16-lib-modules\") pod \"kube-proxy-m7jnz\" (UID: \"31083037-914e-486f-bd3e-01c76d0afb16\") " pod="kube-system/kube-proxy-m7jnz" Apr 28 00:52:51.877308 kubelet[2526]: I0428 00:52:51.873554 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-bpf-maps\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.877308 kubelet[2526]: I0428 00:52:51.873593 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-hostproc\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.877308 kubelet[2526]: I0428 00:52:51.873613 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-cgroup\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.877308 kubelet[2526]: I0428 00:52:51.873714 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-etc-cni-netd\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878637 kubelet[2526]: I0428 00:52:51.873745 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-run\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878637 kubelet[2526]: I0428 00:52:51.873763 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cni-path\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878637 kubelet[2526]: I0428 00:52:51.873789 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-lib-modules\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878637 kubelet[2526]: I0428 00:52:51.877267 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-xtables-lock\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878637 kubelet[2526]: I0428 00:52:51.877349 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31083037-914e-486f-bd3e-01c76d0afb16-xtables-lock\") pod \"kube-proxy-m7jnz\" (UID: 
\"31083037-914e-486f-bd3e-01c76d0afb16\") " pod="kube-system/kube-proxy-m7jnz" Apr 28 00:52:51.878637 kubelet[2526]: I0428 00:52:51.877362 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp4qp\" (UniqueName: \"kubernetes.io/projected/31083037-914e-486f-bd3e-01c76d0afb16-kube-api-access-bp4qp\") pod \"kube-proxy-m7jnz\" (UID: \"31083037-914e-486f-bd3e-01c76d0afb16\") " pod="kube-system/kube-proxy-m7jnz" Apr 28 00:52:51.878769 kubelet[2526]: I0428 00:52:51.877389 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-config-path\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878769 kubelet[2526]: I0428 00:52:51.877479 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-net\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878769 kubelet[2526]: I0428 00:52:51.877524 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-kernel\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878769 kubelet[2526]: I0428 00:52:51.877537 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb85c81a-23da-4457-9bba-49e68db7ac00-clustermesh-secrets\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " 
pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.878769 kubelet[2526]: I0428 00:52:51.877550 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-hubble-tls\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:51.881119 kubelet[2526]: I0428 00:52:51.877562 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8pv\" (UniqueName: \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-kube-api-access-xw8pv\") pod \"cilium-7nr2t\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") " pod="kube-system/cilium-7nr2t" Apr 28 00:52:52.100382 kubelet[2526]: E0428 00:52:52.099895 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:52.105387 containerd[1465]: time="2026-04-28T00:52:52.104513910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m7jnz,Uid:31083037-914e-486f-bd3e-01c76d0afb16,Namespace:kube-system,Attempt:0,}" Apr 28 00:52:52.109359 kubelet[2526]: E0428 00:52:52.109166 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:52.109852 containerd[1465]: time="2026-04-28T00:52:52.109793494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7nr2t,Uid:eb85c81a-23da-4457-9bba-49e68db7ac00,Namespace:kube-system,Attempt:0,}" Apr 28 00:52:52.170132 containerd[1465]: time="2026-04-28T00:52:52.168434112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:52:52.170132 containerd[1465]: time="2026-04-28T00:52:52.169569196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:52:52.170132 containerd[1465]: time="2026-04-28T00:52:52.169589392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:52.170829 containerd[1465]: time="2026-04-28T00:52:52.170492270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:52.179823 containerd[1465]: time="2026-04-28T00:52:52.179093451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:52:52.179823 containerd[1465]: time="2026-04-28T00:52:52.179217888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:52:52.179823 containerd[1465]: time="2026-04-28T00:52:52.179228555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:52.179823 containerd[1465]: time="2026-04-28T00:52:52.179327653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:52.211440 systemd[1]: Started cri-containerd-a4e0697ec5dcc5d4e3c31b3911cc3b1de42f757ae2d3d1f9ddbf3005ba2b3d3d.scope - libcontainer container a4e0697ec5dcc5d4e3c31b3911cc3b1de42f757ae2d3d1f9ddbf3005ba2b3d3d. Apr 28 00:52:52.224088 systemd[1]: Started cri-containerd-e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc.scope - libcontainer container e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc. 
Apr 28 00:52:52.365034 containerd[1465]: time="2026-04-28T00:52:52.362478109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7nr2t,Uid:eb85c81a-23da-4457-9bba-49e68db7ac00,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\"" Apr 28 00:52:52.371314 kubelet[2526]: E0428 00:52:52.368020 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:52.373822 containerd[1465]: time="2026-04-28T00:52:52.371807573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m7jnz,Uid:31083037-914e-486f-bd3e-01c76d0afb16,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4e0697ec5dcc5d4e3c31b3911cc3b1de42f757ae2d3d1f9ddbf3005ba2b3d3d\"" Apr 28 00:52:52.390343 containerd[1465]: time="2026-04-28T00:52:52.390147397Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 28 00:52:52.391322 kubelet[2526]: E0428 00:52:52.391239 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:52.407836 containerd[1465]: time="2026-04-28T00:52:52.407752582Z" level=info msg="CreateContainer within sandbox \"a4e0697ec5dcc5d4e3c31b3911cc3b1de42f757ae2d3d1f9ddbf3005ba2b3d3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 28 00:52:52.428878 systemd[1]: Created slice kubepods-besteffort-podfed718fc_56e8_4472_bce7_5c57a894600b.slice - libcontainer container kubepods-besteffort-podfed718fc_56e8_4472_bce7_5c57a894600b.slice. 
Apr 28 00:52:52.516364 containerd[1465]: time="2026-04-28T00:52:52.515476497Z" level=info msg="CreateContainer within sandbox \"a4e0697ec5dcc5d4e3c31b3911cc3b1de42f757ae2d3d1f9ddbf3005ba2b3d3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d89ac0531ac10f135e2f8debc5e27d950ca5ef3389d4156c65d02c1a250a0ee5\"" Apr 28 00:52:52.519615 containerd[1465]: time="2026-04-28T00:52:52.519574655Z" level=info msg="StartContainer for \"d89ac0531ac10f135e2f8debc5e27d950ca5ef3389d4156c65d02c1a250a0ee5\"" Apr 28 00:52:52.563540 kubelet[2526]: I0428 00:52:52.558308 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fed718fc-56e8-4472-bce7-5c57a894600b-cilium-config-path\") pod \"cilium-operator-78cf5644cb-vkqn7\" (UID: \"fed718fc-56e8-4472-bce7-5c57a894600b\") " pod="kube-system/cilium-operator-78cf5644cb-vkqn7" Apr 28 00:52:52.563540 kubelet[2526]: I0428 00:52:52.558364 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpfw2\" (UniqueName: \"kubernetes.io/projected/fed718fc-56e8-4472-bce7-5c57a894600b-kube-api-access-rpfw2\") pod \"cilium-operator-78cf5644cb-vkqn7\" (UID: \"fed718fc-56e8-4472-bce7-5c57a894600b\") " pod="kube-system/cilium-operator-78cf5644cb-vkqn7" Apr 28 00:52:52.611738 systemd[1]: Started cri-containerd-d89ac0531ac10f135e2f8debc5e27d950ca5ef3389d4156c65d02c1a250a0ee5.scope - libcontainer container d89ac0531ac10f135e2f8debc5e27d950ca5ef3389d4156c65d02c1a250a0ee5. 
Apr 28 00:52:52.650298 containerd[1465]: time="2026-04-28T00:52:52.649703542Z" level=info msg="StartContainer for \"d89ac0531ac10f135e2f8debc5e27d950ca5ef3389d4156c65d02c1a250a0ee5\" returns successfully" Apr 28 00:52:52.752904 kubelet[2526]: E0428 00:52:52.752616 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:52.753599 containerd[1465]: time="2026-04-28T00:52:52.753548586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-vkqn7,Uid:fed718fc-56e8-4472-bce7-5c57a894600b,Namespace:kube-system,Attempt:0,}" Apr 28 00:52:52.819836 containerd[1465]: time="2026-04-28T00:52:52.818724449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:52:52.819836 containerd[1465]: time="2026-04-28T00:52:52.819512683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:52:52.819836 containerd[1465]: time="2026-04-28T00:52:52.819529129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:52.819836 containerd[1465]: time="2026-04-28T00:52:52.819623153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:52:52.886237 systemd[1]: Started cri-containerd-2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170.scope - libcontainer container 2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170. 
Apr 28 00:52:52.955239 containerd[1465]: time="2026-04-28T00:52:52.955062436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-vkqn7,Uid:fed718fc-56e8-4472-bce7-5c57a894600b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\"" Apr 28 00:52:52.956462 kubelet[2526]: E0428 00:52:52.956398 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:53.130067 kubelet[2526]: E0428 00:52:53.129062 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:53.194323 kubelet[2526]: E0428 00:52:53.193297 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:54.025815 kubelet[2526]: E0428 00:52:54.025693 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:54.045937 kubelet[2526]: I0428 00:52:54.045532 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-m7jnz" podStartSLOduration=3.045438336 podStartE2EDuration="3.045438336s" podCreationTimestamp="2026-04-28 00:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:52:53.207847922 +0000 UTC m=+6.352106013" watchObservedRunningTime="2026-04-28 00:52:54.045438336 +0000 UTC m=+7.189696438" Apr 28 00:52:56.399745 kubelet[2526]: E0428 00:52:56.399393 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:52:56.856116 update_engine[1451]: I20260428 00:52:56.855500 1451 update_attempter.cc:509] Updating boot flags... Apr 28 00:52:56.958095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2915) Apr 28 00:52:57.019437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2919) Apr 28 00:53:00.235290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2196959117.mount: Deactivated successfully. Apr 28 00:53:02.733978 containerd[1465]: time="2026-04-28T00:53:02.733625208Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:02.737557 containerd[1465]: time="2026-04-28T00:53:02.737088525Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 28 00:53:02.744832 containerd[1465]: time="2026-04-28T00:53:02.744605725Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:02.750631 containerd[1465]: time="2026-04-28T00:53:02.750301830Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.360052054s" Apr 28 00:53:02.750631 containerd[1465]: time="2026-04-28T00:53:02.750544522Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 28 00:53:02.752663 containerd[1465]: time="2026-04-28T00:53:02.752396341Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 28 00:53:02.759716 containerd[1465]: time="2026-04-28T00:53:02.759232035Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 00:53:02.799341 containerd[1465]: time="2026-04-28T00:53:02.799281487Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\"" Apr 28 00:53:02.800085 containerd[1465]: time="2026-04-28T00:53:02.799783178Z" level=info msg="StartContainer for \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\"" Apr 28 00:53:02.855268 systemd[1]: Started cri-containerd-83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db.scope - libcontainer container 83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db. Apr 28 00:53:02.935937 containerd[1465]: time="2026-04-28T00:53:02.935669125Z" level=info msg="StartContainer for \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\" returns successfully" Apr 28 00:53:02.954464 systemd[1]: cri-containerd-83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db.scope: Deactivated successfully. 
Apr 28 00:53:03.147206 kubelet[2526]: E0428 00:53:03.146704 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:03.150689 containerd[1465]: time="2026-04-28T00:53:03.150401080Z" level=info msg="shim disconnected" id=83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db namespace=k8s.io Apr 28 00:53:03.150689 containerd[1465]: time="2026-04-28T00:53:03.150477072Z" level=warning msg="cleaning up after shim disconnected" id=83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db namespace=k8s.io Apr 28 00:53:03.150689 containerd[1465]: time="2026-04-28T00:53:03.150484369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:53:03.208240 containerd[1465]: time="2026-04-28T00:53:03.207226500Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:53:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 00:53:03.298156 kubelet[2526]: E0428 00:53:03.297850 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:03.334864 containerd[1465]: time="2026-04-28T00:53:03.334566327Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 00:53:03.442985 containerd[1465]: time="2026-04-28T00:53:03.442828852Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\"" Apr 28 00:53:03.445090 
containerd[1465]: time="2026-04-28T00:53:03.444875942Z" level=info msg="StartContainer for \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\"" Apr 28 00:53:03.528084 systemd[1]: Started cri-containerd-cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731.scope - libcontainer container cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731. Apr 28 00:53:03.592413 containerd[1465]: time="2026-04-28T00:53:03.592130865Z" level=info msg="StartContainer for \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\" returns successfully" Apr 28 00:53:03.616173 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 28 00:53:03.616710 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:53:03.616797 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:53:03.629524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 28 00:53:03.630850 systemd[1]: cri-containerd-cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731.scope: Deactivated successfully. Apr 28 00:53:03.702647 containerd[1465]: time="2026-04-28T00:53:03.702338925Z" level=info msg="shim disconnected" id=cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731 namespace=k8s.io Apr 28 00:53:03.702647 containerd[1465]: time="2026-04-28T00:53:03.702386909Z" level=warning msg="cleaning up after shim disconnected" id=cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731 namespace=k8s.io Apr 28 00:53:03.702647 containerd[1465]: time="2026-04-28T00:53:03.702393912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:53:03.703855 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 28 00:53:03.797329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db-rootfs.mount: Deactivated successfully. 
Apr 28 00:53:03.797591 containerd[1465]: time="2026-04-28T00:53:03.797544939Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:53:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 28 00:53:04.045695 kubelet[2526]: E0428 00:53:04.039297 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:04.314476 kubelet[2526]: E0428 00:53:04.314112 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:04.336326 containerd[1465]: time="2026-04-28T00:53:04.335968921Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 28 00:53:04.357201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477055355.mount: Deactivated successfully. Apr 28 00:53:04.397411 containerd[1465]: time="2026-04-28T00:53:04.397334412Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\"" Apr 28 00:53:04.399426 containerd[1465]: time="2026-04-28T00:53:04.399190684Z" level=info msg="StartContainer for \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\"" Apr 28 00:53:04.447463 systemd[1]: Started cri-containerd-e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065.scope - libcontainer container e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065. 
Apr 28 00:53:04.501811 containerd[1465]: time="2026-04-28T00:53:04.501687080Z" level=info msg="StartContainer for \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\" returns successfully" Apr 28 00:53:04.503619 systemd[1]: cri-containerd-e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065.scope: Deactivated successfully. Apr 28 00:53:04.549534 containerd[1465]: time="2026-04-28T00:53:04.549238377Z" level=info msg="shim disconnected" id=e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065 namespace=k8s.io Apr 28 00:53:04.549534 containerd[1465]: time="2026-04-28T00:53:04.549396695Z" level=warning msg="cleaning up after shim disconnected" id=e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065 namespace=k8s.io Apr 28 00:53:04.549534 containerd[1465]: time="2026-04-28T00:53:04.549404361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:53:05.034799 containerd[1465]: time="2026-04-28T00:53:05.034441543Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:05.035862 containerd[1465]: time="2026-04-28T00:53:05.035520156Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 28 00:53:05.038495 containerd[1465]: time="2026-04-28T00:53:05.038223596Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 28 00:53:05.041679 containerd[1465]: time="2026-04-28T00:53:05.041589714Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.289164111s" Apr 28 00:53:05.041763 containerd[1465]: time="2026-04-28T00:53:05.041693694Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 28 00:53:05.056716 containerd[1465]: time="2026-04-28T00:53:05.056421491Z" level=info msg="CreateContainer within sandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 28 00:53:05.155344 containerd[1465]: time="2026-04-28T00:53:05.154895480Z" level=info msg="CreateContainer within sandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\"" Apr 28 00:53:05.158425 containerd[1465]: time="2026-04-28T00:53:05.156799095Z" level=info msg="StartContainer for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\"" Apr 28 00:53:05.238582 systemd[1]: Started cri-containerd-30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0.scope - libcontainer container 30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0. 
Apr 28 00:53:05.348061 kubelet[2526]: E0428 00:53:05.344936 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:05.506560 containerd[1465]: time="2026-04-28T00:53:05.506174352Z" level=info msg="StartContainer for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" returns successfully" Apr 28 00:53:05.511365 containerd[1465]: time="2026-04-28T00:53:05.510982215Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 28 00:53:05.651560 containerd[1465]: time="2026-04-28T00:53:05.649027099Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\"" Apr 28 00:53:05.653890 containerd[1465]: time="2026-04-28T00:53:05.653583265Z" level=info msg="StartContainer for \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\"" Apr 28 00:53:05.718191 systemd[1]: Started cri-containerd-32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b.scope - libcontainer container 32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b. Apr 28 00:53:05.790158 systemd[1]: cri-containerd-32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b.scope: Deactivated successfully. 
Apr 28 00:53:05.795583 containerd[1465]: time="2026-04-28T00:53:05.794534933Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb85c81a_23da_4457_9bba_49e68db7ac00.slice/cri-containerd-32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b.scope/memory.events\": no such file or directory" Apr 28 00:53:05.801871 containerd[1465]: time="2026-04-28T00:53:05.801813517Z" level=info msg="StartContainer for \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\" returns successfully" Apr 28 00:53:05.921931 containerd[1465]: time="2026-04-28T00:53:05.919720601Z" level=info msg="shim disconnected" id=32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b namespace=k8s.io Apr 28 00:53:05.921931 containerd[1465]: time="2026-04-28T00:53:05.919821206Z" level=warning msg="cleaning up after shim disconnected" id=32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b namespace=k8s.io Apr 28 00:53:05.921931 containerd[1465]: time="2026-04-28T00:53:05.919829443Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:53:05.921335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b-rootfs.mount: Deactivated successfully. 
Apr 28 00:53:06.417437 kubelet[2526]: E0428 00:53:06.415972 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:06.447621 kubelet[2526]: E0428 00:53:06.447225 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:06.473567 kubelet[2526]: E0428 00:53:06.473488 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:06.492117 containerd[1465]: time="2026-04-28T00:53:06.491213961Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 28 00:53:06.539051 containerd[1465]: time="2026-04-28T00:53:06.538821474Z" level=info msg="CreateContainer within sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\"" Apr 28 00:53:06.555349 containerd[1465]: time="2026-04-28T00:53:06.555116545Z" level=info msg="StartContainer for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\"" Apr 28 00:53:06.706887 kubelet[2526]: I0428 00:53:06.698443 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-vkqn7" podStartSLOduration=2.605205084 podStartE2EDuration="14.689732462s" podCreationTimestamp="2026-04-28 00:52:52 +0000 UTC" firstStartedPulling="2026-04-28 00:52:52.958666949 +0000 UTC m=+6.102925036" lastFinishedPulling="2026-04-28 00:53:05.043194328 +0000 UTC m=+18.187452414" observedRunningTime="2026-04-28 
00:53:06.504325024 +0000 UTC m=+19.648583121" watchObservedRunningTime="2026-04-28 00:53:06.689732462 +0000 UTC m=+19.833990561" Apr 28 00:53:06.737704 systemd[1]: Started cri-containerd-7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9.scope - libcontainer container 7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9. Apr 28 00:53:06.778434 containerd[1465]: time="2026-04-28T00:53:06.778068222Z" level=info msg="StartContainer for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" returns successfully" Apr 28 00:53:06.981093 kubelet[2526]: I0428 00:53:06.978428 2526 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 28 00:53:07.146064 systemd[1]: Created slice kubepods-burstable-pod236ff5e0_4461_4214_aef2_928b5f4971c4.slice - libcontainer container kubepods-burstable-pod236ff5e0_4461_4214_aef2_928b5f4971c4.slice. Apr 28 00:53:07.175942 systemd[1]: Created slice kubepods-burstable-pod118b2b5b_32c8_4539_9c27_9dc17934bb3f.slice - libcontainer container kubepods-burstable-pod118b2b5b_32c8_4539_9c27_9dc17934bb3f.slice. 
Apr 28 00:53:07.200770 kubelet[2526]: I0428 00:53:07.192920 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4lm\" (UniqueName: \"kubernetes.io/projected/236ff5e0-4461-4214-aef2-928b5f4971c4-kube-api-access-ft4lm\") pod \"coredns-7d764666f9-zctgc\" (UID: \"236ff5e0-4461-4214-aef2-928b5f4971c4\") " pod="kube-system/coredns-7d764666f9-zctgc" Apr 28 00:53:07.200770 kubelet[2526]: I0428 00:53:07.199778 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/236ff5e0-4461-4214-aef2-928b5f4971c4-config-volume\") pod \"coredns-7d764666f9-zctgc\" (UID: \"236ff5e0-4461-4214-aef2-928b5f4971c4\") " pod="kube-system/coredns-7d764666f9-zctgc" Apr 28 00:53:07.301667 kubelet[2526]: I0428 00:53:07.300733 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/118b2b5b-32c8-4539-9c27-9dc17934bb3f-config-volume\") pod \"coredns-7d764666f9-h29pq\" (UID: \"118b2b5b-32c8-4539-9c27-9dc17934bb3f\") " pod="kube-system/coredns-7d764666f9-h29pq" Apr 28 00:53:07.301667 kubelet[2526]: I0428 00:53:07.300776 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zps74\" (UniqueName: \"kubernetes.io/projected/118b2b5b-32c8-4539-9c27-9dc17934bb3f-kube-api-access-zps74\") pod \"coredns-7d764666f9-h29pq\" (UID: \"118b2b5b-32c8-4539-9c27-9dc17934bb3f\") " pod="kube-system/coredns-7d764666f9-h29pq" Apr 28 00:53:07.463854 kubelet[2526]: E0428 00:53:07.463281 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:07.495428 kubelet[2526]: E0428 00:53:07.495261 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:07.496181 kubelet[2526]: E0428 00:53:07.496136 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:07.497034 containerd[1465]: time="2026-04-28T00:53:07.496928017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zctgc,Uid:236ff5e0-4461-4214-aef2-928b5f4971c4,Namespace:kube-system,Attempt:0,}" Apr 28 00:53:07.511127 kubelet[2526]: E0428 00:53:07.510865 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:07.525505 containerd[1465]: time="2026-04-28T00:53:07.523989361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-h29pq,Uid:118b2b5b-32c8-4539-9c27-9dc17934bb3f,Namespace:kube-system,Attempt:0,}" Apr 28 00:53:07.543244 kubelet[2526]: I0428 00:53:07.543086 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-7nr2t" podStartSLOduration=2.452608111 podStartE2EDuration="16.543030081s" podCreationTimestamp="2026-04-28 00:52:51 +0000 UTC" firstStartedPulling="2026-04-28 00:52:52.389061906 +0000 UTC m=+5.533319992" lastFinishedPulling="2026-04-28 00:53:06.479483875 +0000 UTC m=+19.623741962" observedRunningTime="2026-04-28 00:53:07.542320958 +0000 UTC m=+20.686579048" watchObservedRunningTime="2026-04-28 00:53:07.543030081 +0000 UTC m=+20.687288180" Apr 28 00:53:08.503649 kubelet[2526]: E0428 00:53:08.503401 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:09.325910 systemd-networkd[1392]: cilium_host: Link UP Apr 28 00:53:09.329205 
systemd-networkd[1392]: cilium_net: Link UP Apr 28 00:53:09.330241 systemd-networkd[1392]: cilium_net: Gained carrier Apr 28 00:53:09.330693 systemd-networkd[1392]: cilium_host: Gained carrier Apr 28 00:53:09.506560 kubelet[2526]: E0428 00:53:09.506473 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:09.527627 systemd-networkd[1392]: cilium_vxlan: Link UP Apr 28 00:53:09.527635 systemd-networkd[1392]: cilium_vxlan: Gained carrier Apr 28 00:53:09.593907 systemd-networkd[1392]: cilium_host: Gained IPv6LL Apr 28 00:53:10.044060 kernel: NET: Registered PF_ALG protocol family Apr 28 00:53:10.278397 systemd-networkd[1392]: cilium_net: Gained IPv6LL Apr 28 00:53:11.198722 systemd-networkd[1392]: lxc_health: Link UP Apr 28 00:53:11.209651 systemd-networkd[1392]: lxc_health: Gained carrier Apr 28 00:53:11.362592 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Apr 28 00:53:11.749666 systemd-networkd[1392]: lxce55b1563399b: Link UP Apr 28 00:53:11.760064 kernel: eth0: renamed from tmp86da4 Apr 28 00:53:11.774865 systemd-networkd[1392]: lxce55b1563399b: Gained carrier Apr 28 00:53:11.838953 systemd-networkd[1392]: lxcfd0ecf4ad7dc: Link UP Apr 28 00:53:11.846075 kernel: eth0: renamed from tmpe3395 Apr 28 00:53:11.865647 systemd-networkd[1392]: lxcfd0ecf4ad7dc: Gained carrier Apr 28 00:53:12.117123 kubelet[2526]: E0428 00:53:12.115759 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:12.642681 systemd-networkd[1392]: lxc_health: Gained IPv6LL Apr 28 00:53:13.030315 systemd-networkd[1392]: lxce55b1563399b: Gained IPv6LL Apr 28 00:53:13.346743 systemd-networkd[1392]: lxcfd0ecf4ad7dc: Gained IPv6LL Apr 28 00:53:13.497260 kubelet[2526]: I0428 00:53:13.496963 2526 prober_manager.go:356] "Failed to 
trigger a manual run" probe="Readiness" Apr 28 00:53:13.499086 kubelet[2526]: E0428 00:53:13.498186 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:13.554194 kubelet[2526]: E0428 00:53:13.553841 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:22.832176 containerd[1465]: time="2026-04-28T00:53:22.831837683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:53:22.832176 containerd[1465]: time="2026-04-28T00:53:22.831897238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:53:22.832176 containerd[1465]: time="2026-04-28T00:53:22.831912189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:53:22.832176 containerd[1465]: time="2026-04-28T00:53:22.832068014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:53:22.851730 systemd[1]: run-containerd-runc-k8s.io-86da43deb39e203de60a5b33f5718b29f99ebc225632bb5f0473b007fcc8d8a0-runc.0jzUh5.mount: Deactivated successfully. Apr 28 00:53:22.855636 containerd[1465]: time="2026-04-28T00:53:22.855479765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:53:22.855636 containerd[1465]: time="2026-04-28T00:53:22.855558508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:53:22.855636 containerd[1465]: time="2026-04-28T00:53:22.855567602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:53:22.855834 containerd[1465]: time="2026-04-28T00:53:22.855627748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:53:22.876653 systemd[1]: Started cri-containerd-86da43deb39e203de60a5b33f5718b29f99ebc225632bb5f0473b007fcc8d8a0.scope - libcontainer container 86da43deb39e203de60a5b33f5718b29f99ebc225632bb5f0473b007fcc8d8a0. Apr 28 00:53:22.899218 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 00:53:22.915778 systemd[1]: Started cri-containerd-e339541600c99bf651e292d2555ea049a096dc568968cbae0e598469912b477a.scope - libcontainer container e339541600c99bf651e292d2555ea049a096dc568968cbae0e598469912b477a. 
Apr 28 00:53:22.927671 systemd-resolved[1395]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 28 00:53:22.942204 containerd[1465]: time="2026-04-28T00:53:22.942166070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-zctgc,Uid:236ff5e0-4461-4214-aef2-928b5f4971c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"86da43deb39e203de60a5b33f5718b29f99ebc225632bb5f0473b007fcc8d8a0\"" Apr 28 00:53:22.943395 kubelet[2526]: E0428 00:53:22.943351 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:22.956963 containerd[1465]: time="2026-04-28T00:53:22.956866862Z" level=info msg="CreateContainer within sandbox \"86da43deb39e203de60a5b33f5718b29f99ebc225632bb5f0473b007fcc8d8a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:53:22.975284 containerd[1465]: time="2026-04-28T00:53:22.975150303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-h29pq,Uid:118b2b5b-32c8-4539-9c27-9dc17934bb3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e339541600c99bf651e292d2555ea049a096dc568968cbae0e598469912b477a\"" Apr 28 00:53:22.976466 kubelet[2526]: E0428 00:53:22.976401 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:22.987863 containerd[1465]: time="2026-04-28T00:53:22.987721702Z" level=info msg="CreateContainer within sandbox \"e339541600c99bf651e292d2555ea049a096dc568968cbae0e598469912b477a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 28 00:53:23.000281 containerd[1465]: time="2026-04-28T00:53:23.000155491Z" level=info msg="CreateContainer within sandbox \"86da43deb39e203de60a5b33f5718b29f99ebc225632bb5f0473b007fcc8d8a0\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e90637f364010899c8e5e88353c1f9092f5b7889b6457d9b592f5d832c9ed64\"" Apr 28 00:53:23.000944 containerd[1465]: time="2026-04-28T00:53:23.000913174Z" level=info msg="StartContainer for \"5e90637f364010899c8e5e88353c1f9092f5b7889b6457d9b592f5d832c9ed64\"" Apr 28 00:53:23.014120 containerd[1465]: time="2026-04-28T00:53:23.013980949Z" level=info msg="CreateContainer within sandbox \"e339541600c99bf651e292d2555ea049a096dc568968cbae0e598469912b477a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40bdfe522001ceb8724237a911324e794101f64155422cc0b57058e7b9313c21\"" Apr 28 00:53:23.016911 containerd[1465]: time="2026-04-28T00:53:23.015308643Z" level=info msg="StartContainer for \"40bdfe522001ceb8724237a911324e794101f64155422cc0b57058e7b9313c21\"" Apr 28 00:53:23.034305 systemd[1]: Started cri-containerd-5e90637f364010899c8e5e88353c1f9092f5b7889b6457d9b592f5d832c9ed64.scope - libcontainer container 5e90637f364010899c8e5e88353c1f9092f5b7889b6457d9b592f5d832c9ed64. Apr 28 00:53:23.049717 systemd[1]: Started cri-containerd-40bdfe522001ceb8724237a911324e794101f64155422cc0b57058e7b9313c21.scope - libcontainer container 40bdfe522001ceb8724237a911324e794101f64155422cc0b57058e7b9313c21. Apr 28 00:53:23.143273 containerd[1465]: time="2026-04-28T00:53:23.142838966Z" level=info msg="StartContainer for \"5e90637f364010899c8e5e88353c1f9092f5b7889b6457d9b592f5d832c9ed64\" returns successfully" Apr 28 00:53:23.150783 containerd[1465]: time="2026-04-28T00:53:23.148356584Z" level=info msg="StartContainer for \"40bdfe522001ceb8724237a911324e794101f64155422cc0b57058e7b9313c21\" returns successfully" Apr 28 00:53:23.600731 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:48196.service - OpenSSH per-connection server daemon (10.0.0.1:48196). 
Apr 28 00:53:23.650609 sshd[3942]: Accepted publickey for core from 10.0.0.1 port 48196 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:23.652395 sshd[3942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:23.674938 systemd-logind[1445]: New session 8 of user core. Apr 28 00:53:23.753914 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 28 00:53:23.810683 kubelet[2526]: E0428 00:53:23.810620 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:23.828390 kubelet[2526]: E0428 00:53:23.828236 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:23.843949 systemd[1]: run-containerd-runc-k8s.io-e339541600c99bf651e292d2555ea049a096dc568968cbae0e598469912b477a-runc.TaDooQ.mount: Deactivated successfully. Apr 28 00:53:23.932896 kubelet[2526]: I0428 00:53:23.932508 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-zctgc" podStartSLOduration=31.932487554 podStartE2EDuration="31.932487554s" podCreationTimestamp="2026-04-28 00:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:53:23.834760447 +0000 UTC m=+36.979018554" watchObservedRunningTime="2026-04-28 00:53:23.932487554 +0000 UTC m=+37.076745663" Apr 28 00:53:24.253062 sshd[3942]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:24.260782 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:48196.service: Deactivated successfully. Apr 28 00:53:24.264206 systemd[1]: session-8.scope: Deactivated successfully. Apr 28 00:53:24.273888 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. 
Apr 28 00:53:24.286563 systemd-logind[1445]: Removed session 8. Apr 28 00:53:24.835494 kubelet[2526]: E0428 00:53:24.835267 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:24.835494 kubelet[2526]: E0428 00:53:24.835310 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:25.856248 kubelet[2526]: E0428 00:53:25.856069 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:25.856248 kubelet[2526]: E0428 00:53:25.856119 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:29.319669 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:34654.service - OpenSSH per-connection server daemon (10.0.0.1:34654). Apr 28 00:53:29.494623 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 34654 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:29.501842 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:29.518465 systemd-logind[1445]: New session 9 of user core. Apr 28 00:53:29.535263 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 28 00:53:29.813913 sshd[3966]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:29.818718 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:34654.service: Deactivated successfully. Apr 28 00:53:29.823451 systemd[1]: session-9.scope: Deactivated successfully. Apr 28 00:53:29.824424 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. 
Apr 28 00:53:29.827449 systemd-logind[1445]: Removed session 9. Apr 28 00:53:34.863610 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:34668.service - OpenSSH per-connection server daemon (10.0.0.1:34668). Apr 28 00:53:35.162099 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 34668 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:35.166465 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:35.211382 systemd-logind[1445]: New session 10 of user core. Apr 28 00:53:35.229750 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 28 00:53:35.859662 sshd[3981]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:35.884227 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:34668.service: Deactivated successfully. Apr 28 00:53:35.893774 systemd[1]: session-10.scope: Deactivated successfully. Apr 28 00:53:35.967392 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Apr 28 00:53:35.977045 systemd-logind[1445]: Removed session 10. Apr 28 00:53:40.982566 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:60404.service - OpenSSH per-connection server daemon (10.0.0.1:60404). Apr 28 00:53:41.194188 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 60404 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:41.197744 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:41.212067 systemd-logind[1445]: New session 11 of user core. Apr 28 00:53:41.225456 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 28 00:53:41.695934 sshd[3997]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:41.704913 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:60404.service: Deactivated successfully. Apr 28 00:53:41.717893 systemd[1]: session-11.scope: Deactivated successfully. Apr 28 00:53:41.731481 systemd-logind[1445]: Session 11 logged out. 
Waiting for processes to exit. Apr 28 00:53:41.735419 systemd-logind[1445]: Removed session 11. Apr 28 00:53:46.819537 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:38834.service - OpenSSH per-connection server daemon (10.0.0.1:38834). Apr 28 00:53:47.059375 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 38834 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:47.080157 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:47.138296 systemd-logind[1445]: New session 12 of user core. Apr 28 00:53:47.150425 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 28 00:53:47.669108 sshd[4012]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:47.690420 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:38834.service: Deactivated successfully. Apr 28 00:53:47.701299 systemd[1]: session-12.scope: Deactivated successfully. Apr 28 00:53:47.702371 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Apr 28 00:53:47.704421 systemd-logind[1445]: Removed session 12. Apr 28 00:53:52.705642 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:38846.service - OpenSSH per-connection server daemon (10.0.0.1:38846). Apr 28 00:53:52.756358 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:52.758889 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:52.778336 systemd-logind[1445]: New session 13 of user core. Apr 28 00:53:52.844631 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 28 00:53:53.051446 sshd[4030]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:53.070056 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:38846.service: Deactivated successfully. Apr 28 00:53:53.074497 systemd[1]: session-13.scope: Deactivated successfully. 
Apr 28 00:53:53.076221 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Apr 28 00:53:53.083817 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:38848.service - OpenSSH per-connection server daemon (10.0.0.1:38848). Apr 28 00:53:53.089343 systemd-logind[1445]: Removed session 13. Apr 28 00:53:53.185440 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 38848 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:53.191380 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:53.196501 systemd-logind[1445]: New session 14 of user core. Apr 28 00:53:53.203835 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 28 00:53:53.552680 sshd[4047]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:53.595952 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:38848.service: Deactivated successfully. Apr 28 00:53:53.597893 systemd[1]: session-14.scope: Deactivated successfully. Apr 28 00:53:53.609638 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Apr 28 00:53:53.617850 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:38858.service - OpenSSH per-connection server daemon (10.0.0.1:38858). Apr 28 00:53:53.622663 systemd-logind[1445]: Removed session 14. Apr 28 00:53:53.920292 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 38858 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:53.925797 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:53:53.931325 systemd-logind[1445]: New session 15 of user core. Apr 28 00:53:53.936685 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 28 00:53:54.056635 kubelet[2526]: E0428 00:53:54.053772 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:53:54.481548 sshd[4059]: pam_unix(sshd:session): session closed for user core Apr 28 00:53:54.499114 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:38858.service: Deactivated successfully. Apr 28 00:53:54.507489 systemd[1]: session-15.scope: Deactivated successfully. Apr 28 00:53:54.508579 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Apr 28 00:53:54.509831 systemd-logind[1445]: Removed session 15. Apr 28 00:53:59.649872 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864). Apr 28 00:53:59.946481 sshd[4073]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:53:59.952798 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:00.003706 systemd-logind[1445]: New session 16 of user core. Apr 28 00:54:00.014247 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 28 00:54:00.233565 sshd[4073]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:00.236874 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:34864.service: Deactivated successfully. Apr 28 00:54:00.246661 systemd[1]: session-16.scope: Deactivated successfully. Apr 28 00:54:00.249437 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Apr 28 00:54:00.253251 systemd-logind[1445]: Removed session 16. Apr 28 00:54:05.274280 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:46432.service - OpenSSH per-connection server daemon (10.0.0.1:46432). 
Apr 28 00:54:05.393619 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 46432 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:54:05.399822 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:05.415927 systemd-logind[1445]: New session 17 of user core. Apr 28 00:54:05.429141 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 28 00:54:05.674837 sshd[4089]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:05.678216 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:46432.service: Deactivated successfully. Apr 28 00:54:05.679733 systemd[1]: session-17.scope: Deactivated successfully. Apr 28 00:54:05.680255 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Apr 28 00:54:05.683757 systemd-logind[1445]: Removed session 17. Apr 28 00:54:09.053973 kubelet[2526]: E0428 00:54:09.053769 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:10.691648 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:46436.service - OpenSSH per-connection server daemon (10.0.0.1:46436). Apr 28 00:54:10.737737 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 46436 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:54:10.739671 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:10.743661 systemd-logind[1445]: New session 18 of user core. Apr 28 00:54:10.752149 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 28 00:54:10.867495 sshd[4103]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:10.874565 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:46436.service: Deactivated successfully. Apr 28 00:54:10.878927 systemd[1]: session-18.scope: Deactivated successfully. 
Apr 28 00:54:10.880463 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Apr 28 00:54:10.896666 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:46452.service - OpenSSH per-connection server daemon (10.0.0.1:46452). Apr 28 00:54:10.898526 systemd-logind[1445]: Removed session 18. Apr 28 00:54:10.921160 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 46452 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:54:10.922594 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:10.926695 systemd-logind[1445]: New session 19 of user core. Apr 28 00:54:10.937428 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 28 00:54:11.422406 sshd[4117]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:11.437910 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:46452.service: Deactivated successfully. Apr 28 00:54:11.439535 systemd[1]: session-19.scope: Deactivated successfully. Apr 28 00:54:11.448741 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Apr 28 00:54:11.455311 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:46462.service - OpenSSH per-connection server daemon (10.0.0.1:46462). Apr 28 00:54:11.456167 systemd-logind[1445]: Removed session 19. Apr 28 00:54:11.505704 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 46462 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:54:11.508095 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:11.517436 systemd-logind[1445]: New session 20 of user core. Apr 28 00:54:11.532744 systemd[1]: Started session-20.scope - Session 20 of User core. 
Apr 28 00:54:12.057537 kubelet[2526]: E0428 00:54:12.054456    2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:13.099834 sshd[4130]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:13.161562 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:46472.service - OpenSSH per-connection server daemon (10.0.0.1:46472).
Apr 28 00:54:13.165507 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:46462.service: Deactivated successfully.
Apr 28 00:54:13.167249 systemd[1]: session-20.scope: Deactivated successfully.
Apr 28 00:54:13.167447 systemd[1]: session-20.scope: Consumed 1.172s CPU time.
Apr 28 00:54:13.176270 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit.
Apr 28 00:54:13.187139 systemd-logind[1445]: Removed session 20.
Apr 28 00:54:13.216045 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 46472 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:13.218397 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:13.223453 systemd-logind[1445]: New session 21 of user core.
Apr 28 00:54:13.230501 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 28 00:54:13.524934 sshd[4146]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:13.539438 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:46472.service: Deactivated successfully.
Apr 28 00:54:13.541248 systemd[1]: session-21.scope: Deactivated successfully.
Apr 28 00:54:13.542757 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit.
Apr 28 00:54:13.554986 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:46480.service - OpenSSH per-connection server daemon (10.0.0.1:46480).
Apr 28 00:54:13.558129 systemd-logind[1445]: Removed session 21.
Apr 28 00:54:13.607063 sshd[4160]: Accepted publickey for core from 10.0.0.1 port 46480 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:13.608813 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:13.619862 systemd-logind[1445]: New session 22 of user core.
Apr 28 00:54:13.641914 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 28 00:54:13.882193 sshd[4160]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:13.896374 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:46480.service: Deactivated successfully.
Apr 28 00:54:13.905246 systemd[1]: session-22.scope: Deactivated successfully.
Apr 28 00:54:13.910179 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit.
Apr 28 00:54:13.911508 systemd-logind[1445]: Removed session 22.
Apr 28 00:54:15.059972 kubelet[2526]: E0428 00:54:15.059697    2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:18.908261 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:60444.service - OpenSSH per-connection server daemon (10.0.0.1:60444).
Apr 28 00:54:18.936062 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 60444 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:18.940229 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:18.944829 systemd-logind[1445]: New session 23 of user core.
Apr 28 00:54:18.950172 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 28 00:54:19.053930 kubelet[2526]: E0428 00:54:19.053775    2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:19.094390 sshd[4178]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:19.097950 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:60444.service: Deactivated successfully.
Apr 28 00:54:19.099387 systemd[1]: session-23.scope: Deactivated successfully.
Apr 28 00:54:19.100067 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit.
Apr 28 00:54:19.101145 systemd-logind[1445]: Removed session 23.
Apr 28 00:54:24.129864 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:60456.service - OpenSSH per-connection server daemon (10.0.0.1:60456).
Apr 28 00:54:24.192898 sshd[4194]: Accepted publickey for core from 10.0.0.1 port 60456 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:24.195266 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:24.207346 systemd-logind[1445]: New session 24 of user core.
Apr 28 00:54:24.225459 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 28 00:54:24.383058 sshd[4194]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:24.386910 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:60456.service: Deactivated successfully.
Apr 28 00:54:24.391083 systemd[1]: session-24.scope: Deactivated successfully.
Apr 28 00:54:24.391663 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit.
Apr 28 00:54:24.397156 systemd-logind[1445]: Removed session 24.
Apr 28 00:54:29.058180 kubelet[2526]: E0428 00:54:29.057936    2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:29.410474 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:37894.service - OpenSSH per-connection server daemon (10.0.0.1:37894).
Apr 28 00:54:29.439725 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 37894 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:29.441075 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:29.445072 systemd-logind[1445]: New session 25 of user core.
Apr 28 00:54:29.454177 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 28 00:54:29.563519 sshd[4208]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:29.573463 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:37894.service: Deactivated successfully.
Apr 28 00:54:29.575161 systemd[1]: session-25.scope: Deactivated successfully.
Apr 28 00:54:29.576805 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit.
Apr 28 00:54:29.587371 systemd[1]: Started sshd@25-10.0.0.98:22-10.0.0.1:37900.service - OpenSSH per-connection server daemon (10.0.0.1:37900).
Apr 28 00:54:29.589238 systemd-logind[1445]: Removed session 25.
Apr 28 00:54:29.661563 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 37900 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:29.663100 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:29.672180 systemd-logind[1445]: New session 26 of user core.
Apr 28 00:54:29.684557 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 28 00:54:34.891844 kubelet[2526]: E0428 00:54:34.891508    2526 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.24s"
Apr 28 00:54:35.070123 kubelet[2526]: I0428 00:54:35.069533    2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-h29pq" podStartSLOduration=103.0695186 podStartE2EDuration="1m43.0695186s" podCreationTimestamp="2026-04-28 00:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:53:24.118655745 +0000 UTC m=+37.262913841" watchObservedRunningTime="2026-04-28 00:54:35.0695186 +0000 UTC m=+108.213776686"
Apr 28 00:54:35.086187 containerd[1465]: time="2026-04-28T00:54:35.085658272Z" level=info msg="StopContainer for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" with timeout 30 (s)"
Apr 28 00:54:35.090729 containerd[1465]: time="2026-04-28T00:54:35.090628759Z" level=info msg="Stop container \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" with signal terminated"
Apr 28 00:54:35.140257 systemd[1]: run-containerd-runc-k8s.io-7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9-runc.qeZOy1.mount: Deactivated successfully.
Apr 28 00:54:35.175610 containerd[1465]: time="2026-04-28T00:54:35.175328641Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 28 00:54:35.192864 systemd[1]: cri-containerd-30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0.scope: Deactivated successfully.
Apr 28 00:54:35.193825 systemd[1]: cri-containerd-30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0.scope: Consumed 2.056s CPU time.
Apr 28 00:54:35.200408 containerd[1465]: time="2026-04-28T00:54:35.200373725Z" level=info msg="StopContainer for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" with timeout 2 (s)"
Apr 28 00:54:35.200707 containerd[1465]: time="2026-04-28T00:54:35.200666886Z" level=info msg="Stop container \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" with signal terminated"
Apr 28 00:54:35.223539 systemd-networkd[1392]: lxc_health: Link DOWN
Apr 28 00:54:35.223545 systemd-networkd[1392]: lxc_health: Lost carrier
Apr 28 00:54:35.246068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0-rootfs.mount: Deactivated successfully.
Apr 28 00:54:35.252294 containerd[1465]: time="2026-04-28T00:54:35.252108643Z" level=info msg="shim disconnected" id=30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0 namespace=k8s.io
Apr 28 00:54:35.252602 containerd[1465]: time="2026-04-28T00:54:35.252302702Z" level=warning msg="cleaning up after shim disconnected" id=30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0 namespace=k8s.io
Apr 28 00:54:35.252602 containerd[1465]: time="2026-04-28T00:54:35.252349493Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:54:35.263565 systemd[1]: cri-containerd-7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9.scope: Deactivated successfully.
Apr 28 00:54:35.264064 systemd[1]: cri-containerd-7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9.scope: Consumed 18.181s CPU time.
Apr 28 00:54:35.276337 containerd[1465]: time="2026-04-28T00:54:35.276271389Z" level=warning msg="cleanup warnings time=\"2026-04-28T00:54:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 28 00:54:35.283856 containerd[1465]: time="2026-04-28T00:54:35.283597682Z" level=info msg="StopContainer for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" returns successfully"
Apr 28 00:54:35.285105 containerd[1465]: time="2026-04-28T00:54:35.285048466Z" level=info msg="StopPodSandbox for \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\""
Apr 28 00:54:35.285204 containerd[1465]: time="2026-04-28T00:54:35.285112754Z" level=info msg="Container to stop \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 00:54:35.293169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170-shm.mount: Deactivated successfully.
Apr 28 00:54:35.304353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9-rootfs.mount: Deactivated successfully.
Apr 28 00:54:35.305081 systemd[1]: cri-containerd-2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170.scope: Deactivated successfully.
Apr 28 00:54:35.321962 containerd[1465]: time="2026-04-28T00:54:35.321899793Z" level=info msg="shim disconnected" id=7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9 namespace=k8s.io
Apr 28 00:54:35.321962 containerd[1465]: time="2026-04-28T00:54:35.321956744Z" level=warning msg="cleaning up after shim disconnected" id=7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9 namespace=k8s.io
Apr 28 00:54:35.321962 containerd[1465]: time="2026-04-28T00:54:35.321963801Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:54:35.350731 containerd[1465]: time="2026-04-28T00:54:35.350584785Z" level=info msg="StopContainer for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" returns successfully"
Apr 28 00:54:35.353705 containerd[1465]: time="2026-04-28T00:54:35.353648092Z" level=info msg="StopPodSandbox for \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\""
Apr 28 00:54:35.354254 containerd[1465]: time="2026-04-28T00:54:35.353711520Z" level=info msg="Container to stop \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 00:54:35.354254 containerd[1465]: time="2026-04-28T00:54:35.353721628Z" level=info msg="Container to stop \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 00:54:35.354254 containerd[1465]: time="2026-04-28T00:54:35.353729114Z" level=info msg="Container to stop \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 00:54:35.354254 containerd[1465]: time="2026-04-28T00:54:35.353736105Z" level=info msg="Container to stop \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 00:54:35.354254 containerd[1465]: time="2026-04-28T00:54:35.353759761Z" level=info msg="Container to stop \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 28 00:54:35.364583 containerd[1465]: time="2026-04-28T00:54:35.364495795Z" level=info msg="shim disconnected" id=2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170 namespace=k8s.io
Apr 28 00:54:35.364583 containerd[1465]: time="2026-04-28T00:54:35.364575366Z" level=warning msg="cleaning up after shim disconnected" id=2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170 namespace=k8s.io
Apr 28 00:54:35.364583 containerd[1465]: time="2026-04-28T00:54:35.364583056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:54:35.370700 systemd[1]: cri-containerd-e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc.scope: Deactivated successfully.
Apr 28 00:54:35.388044 containerd[1465]: time="2026-04-28T00:54:35.387982346Z" level=info msg="TearDown network for sandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" successfully"
Apr 28 00:54:35.388044 containerd[1465]: time="2026-04-28T00:54:35.388055646Z" level=info msg="StopPodSandbox for \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" returns successfully"
Apr 28 00:54:35.413786 containerd[1465]: time="2026-04-28T00:54:35.413317601Z" level=info msg="shim disconnected" id=e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc namespace=k8s.io
Apr 28 00:54:35.413786 containerd[1465]: time="2026-04-28T00:54:35.413403206Z" level=warning msg="cleaning up after shim disconnected" id=e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc namespace=k8s.io
Apr 28 00:54:35.413786 containerd[1465]: time="2026-04-28T00:54:35.413410114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:54:35.422360 kubelet[2526]: I0428 00:54:35.421436    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/fed718fc-56e8-4472-bce7-5c57a894600b-kube-api-access-rpfw2\" (UniqueName: \"kubernetes.io/projected/fed718fc-56e8-4472-bce7-5c57a894600b-kube-api-access-rpfw2\") pod \"fed718fc-56e8-4472-bce7-5c57a894600b\" (UID: \"fed718fc-56e8-4472-bce7-5c57a894600b\") "
Apr 28 00:54:35.422360 kubelet[2526]: I0428 00:54:35.421674    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/fed718fc-56e8-4472-bce7-5c57a894600b-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fed718fc-56e8-4472-bce7-5c57a894600b-cilium-config-path\") pod \"fed718fc-56e8-4472-bce7-5c57a894600b\" (UID: \"fed718fc-56e8-4472-bce7-5c57a894600b\") "
Apr 28 00:54:35.426258 kubelet[2526]: I0428 00:54:35.426096    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fed718fc-56e8-4472-bce7-5c57a894600b-cilium-config-path" pod "fed718fc-56e8-4472-bce7-5c57a894600b" (UID: "fed718fc-56e8-4472-bce7-5c57a894600b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 28 00:54:35.428983 kubelet[2526]: I0428 00:54:35.428905    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed718fc-56e8-4472-bce7-5c57a894600b-kube-api-access-rpfw2" pod "fed718fc-56e8-4472-bce7-5c57a894600b" (UID: "fed718fc-56e8-4472-bce7-5c57a894600b"). InnerVolumeSpecName "kube-api-access-rpfw2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 00:54:35.448881 containerd[1465]: time="2026-04-28T00:54:35.448550678Z" level=info msg="TearDown network for sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" successfully"
Apr 28 00:54:35.448881 containerd[1465]: time="2026-04-28T00:54:35.448629444Z" level=info msg="StopPodSandbox for \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" returns successfully"
Apr 28 00:54:35.523803 kubelet[2526]: I0428 00:54:35.522822    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-hubble-tls\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.523803 kubelet[2526]: I0428 00:54:35.522979    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-bpf-maps\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.523803 kubelet[2526]: I0428 00:54:35.523085    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/eb85c81a-23da-4457-9bba-49e68db7ac00-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb85c81a-23da-4457-9bba-49e68db7ac00-clustermesh-secrets\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.523803 kubelet[2526]: I0428 00:54:35.523119    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-etc-cni-netd\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.523803 kubelet[2526]: I0428 00:54:35.523145    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-run\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524263 kubelet[2526]: I0428 00:54:35.523163    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-config-path\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524263 kubelet[2526]: I0428 00:54:35.523201    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cni-path\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cni-path\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524263 kubelet[2526]: I0428 00:54:35.523214    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-hostproc\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-hostproc\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524263 kubelet[2526]: I0428 00:54:35.523239    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-kube-api-access-xw8pv\" (UniqueName: \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-kube-api-access-xw8pv\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524263 kubelet[2526]: I0428 00:54:35.523254    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-kernel\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524358 kubelet[2526]: I0428 00:54:35.523286    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-xtables-lock\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524358 kubelet[2526]: I0428 00:54:35.523301    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-net\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524358 kubelet[2526]: I0428 00:54:35.523345    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-lib-modules\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524358 kubelet[2526]: I0428 00:54:35.523364    2526 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-cgroup\") pod \"eb85c81a-23da-4457-9bba-49e68db7ac00\" (UID: \"eb85c81a-23da-4457-9bba-49e68db7ac00\") "
Apr 28 00:54:35.524358 kubelet[2526]: I0428 00:54:35.523393    2526 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpfw2\" (UniqueName: \"kubernetes.io/projected/fed718fc-56e8-4472-bce7-5c57a894600b-kube-api-access-rpfw2\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.524358 kubelet[2526]: I0428 00:54:35.523429    2526 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fed718fc-56e8-4472-bce7-5c57a894600b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.524488 kubelet[2526]: I0428 00:54:35.523456    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-cgroup" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524488 kubelet[2526]: I0428 00:54:35.523443    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-hostproc" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524488 kubelet[2526]: I0428 00:54:35.523517    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-etc-cni-netd" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524488 kubelet[2526]: I0428 00:54:35.523528    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-run" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524488 kubelet[2526]: I0428 00:54:35.523795    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-bpf-maps" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524646 kubelet[2526]: I0428 00:54:35.523840    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cni-path" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524646 kubelet[2526]: I0428 00:54:35.523855    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-net" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524646 kubelet[2526]: I0428 00:54:35.523865    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-kernel" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524646 kubelet[2526]: I0428 00:54:35.523875    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-xtables-lock" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.524646 kubelet[2526]: I0428 00:54:35.523886    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-lib-modules" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 28 00:54:35.527588 kubelet[2526]: I0428 00:54:35.527427    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-config-path" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 28 00:54:35.528225 kubelet[2526]: I0428 00:54:35.528165    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-hubble-tls" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 00:54:35.528329 kubelet[2526]: I0428 00:54:35.528294    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-kube-api-access-xw8pv" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "kube-api-access-xw8pv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 28 00:54:35.528423 kubelet[2526]: I0428 00:54:35.528398    2526 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb85c81a-23da-4457-9bba-49e68db7ac00-clustermesh-secrets" pod "eb85c81a-23da-4457-9bba-49e68db7ac00" (UID: "eb85c81a-23da-4457-9bba-49e68db7ac00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.647896    2526 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648034    2526 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648053    2526 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xw8pv\" (UniqueName: \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-kube-api-access-xw8pv\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648060    2526 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648067    2526 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648093    2526 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648098    2526 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.648340 kubelet[2526]: I0428 00:54:35.648104    2526 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.649188 kubelet[2526]: I0428 00:54:35.648113    2526 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb85c81a-23da-4457-9bba-49e68db7ac00-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.649188 kubelet[2526]: I0428 00:54:35.648119    2526 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.649188 kubelet[2526]: I0428 00:54:35.648124    2526 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb85c81a-23da-4457-9bba-49e68db7ac00-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.649188 kubelet[2526]: I0428 00:54:35.648130    2526 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.649188 kubelet[2526]: I0428 00:54:35.648200    2526 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.649188 kubelet[2526]: I0428 00:54:35.648206    2526 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb85c81a-23da-4457-9bba-49e68db7ac00-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 28 00:54:35.664709 sshd[4222]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:35.680357 systemd[1]: sshd@25-10.0.0.98:22-10.0.0.1:37900.service: Deactivated successfully.
Apr 28 00:54:35.686569 systemd[1]: session-26.scope: Deactivated successfully.
Apr 28 00:54:35.686808 systemd[1]: session-26.scope: Consumed 3.126s CPU time.
Apr 28 00:54:35.688406 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit.
Apr 28 00:54:35.697844 systemd[1]: Started sshd@26-10.0.0.98:22-10.0.0.1:57954.service - OpenSSH per-connection server daemon (10.0.0.1:57954).
Apr 28 00:54:35.702382 systemd-logind[1445]: Removed session 26.
Apr 28 00:54:35.755720 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 57954 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps
Apr 28 00:54:35.757170 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 28 00:54:35.762973 systemd-logind[1445]: New session 27 of user core.
Apr 28 00:54:35.779662 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 28 00:54:35.895211 kubelet[2526]: I0428 00:54:35.895107    2526 scope.go:122] "RemoveContainer" containerID="30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0"
Apr 28 00:54:35.898115 containerd[1465]: time="2026-04-28T00:54:35.898051391Z" level=info msg="RemoveContainer for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\""
Apr 28 00:54:35.902256 systemd[1]: Removed slice kubepods-besteffort-podfed718fc_56e8_4472_bce7_5c57a894600b.slice - libcontainer container kubepods-besteffort-podfed718fc_56e8_4472_bce7_5c57a894600b.slice.
Apr 28 00:54:35.902339 systemd[1]: kubepods-besteffort-podfed718fc_56e8_4472_bce7_5c57a894600b.slice: Consumed 2.094s CPU time.
Apr 28 00:54:35.913249 containerd[1465]: time="2026-04-28T00:54:35.913190395Z" level=info msg="RemoveContainer for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" returns successfully"
Apr 28 00:54:35.913570 kubelet[2526]: I0428 00:54:35.913532    2526 scope.go:122] "RemoveContainer" containerID="30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0"
Apr 28 00:54:35.916257 systemd[1]: Removed slice kubepods-burstable-podeb85c81a_23da_4457_9bba_49e68db7ac00.slice - libcontainer container kubepods-burstable-podeb85c81a_23da_4457_9bba_49e68db7ac00.slice.
Apr 28 00:54:35.916328 systemd[1]: kubepods-burstable-podeb85c81a_23da_4457_9bba_49e68db7ac00.slice: Consumed 18.386s CPU time.
Apr 28 00:54:35.920447 containerd[1465]: time="2026-04-28T00:54:35.920342207Z" level=error msg="ContainerStatus for \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\": not found"
Apr 28 00:54:35.938589 kubelet[2526]: E0428 00:54:35.936937    2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\": not found" containerID="30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0"
Apr 28 00:54:35.940831 kubelet[2526]: I0428 00:54:35.939159    2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0"} err="failed to get container status \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\": rpc error: code = NotFound desc = an error occurred when try to find container \"30a4eeeee1dc519851cc2ff43d18bd5800de025f2489462728e86dee1fe63ec0\": not found"
Apr 28 00:54:35.940831 kubelet[2526]: I0428 00:54:35.939242    2526 scope.go:122] "RemoveContainer" containerID="7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9"
Apr 28 00:54:35.949344 containerd[1465]: time="2026-04-28T00:54:35.949268283Z" level=info msg="RemoveContainer for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\""
Apr 28 00:54:35.957159 containerd[1465]: time="2026-04-28T00:54:35.956423294Z" level=info msg="RemoveContainer for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" returns successfully"
Apr 28 00:54:35.958230 kubelet[2526]: I0428 00:54:35.958161    2526 scope.go:122] "RemoveContainer" containerID="32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b"
Apr 28 00:54:35.959784 containerd[1465]: time="2026-04-28T00:54:35.959747257Z" level=info msg="RemoveContainer for \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\""
Apr 28 00:54:35.967129 containerd[1465]: time="2026-04-28T00:54:35.967066991Z" level=info msg="RemoveContainer for \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\" returns successfully"
Apr 28 00:54:35.967519 kubelet[2526]: I0428 00:54:35.967497    2526 scope.go:122] "RemoveContainer" containerID="e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065"
Apr 28 00:54:35.970800 containerd[1465]: time="2026-04-28T00:54:35.970272323Z" level=info msg="RemoveContainer for \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\""
Apr 28 00:54:35.973807 containerd[1465]: time="2026-04-28T00:54:35.973780218Z" level=info msg="RemoveContainer for \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\" returns successfully"
Apr 28 00:54:35.974091 kubelet[2526]: I0428 00:54:35.974028    2526 scope.go:122] "RemoveContainer" containerID="cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731"
Apr 28 00:54:35.974966 containerd[1465]: time="2026-04-28T00:54:35.974939354Z" level=info msg="RemoveContainer for \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\""
Apr 28 00:54:35.978183 containerd[1465]: time="2026-04-28T00:54:35.978133537Z" level=info msg="RemoveContainer for \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\" returns successfully"
Apr 28 00:54:35.978374 kubelet[2526]: I0428 00:54:35.978332    2526 scope.go:122] "RemoveContainer" containerID="83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db"
Apr 28 00:54:35.979386 containerd[1465]: time="2026-04-28T00:54:35.979359751Z" level=info msg="RemoveContainer for \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\""
Apr 28 00:54:35.982079 containerd[1465]: time="2026-04-28T00:54:35.982043387Z" level=info msg="RemoveContainer for
\"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\" returns successfully" Apr 28 00:54:35.982257 kubelet[2526]: I0428 00:54:35.982200 2526 scope.go:122] "RemoveContainer" containerID="7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9" Apr 28 00:54:35.982369 containerd[1465]: time="2026-04-28T00:54:35.982325110Z" level=error msg="ContainerStatus for \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\": not found" Apr 28 00:54:35.982492 kubelet[2526]: E0428 00:54:35.982470 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\": not found" containerID="7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9" Apr 28 00:54:35.982536 kubelet[2526]: I0428 00:54:35.982500 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9"} err="failed to get container status \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e86f7d01b4d6dd3949cea7d4323b097d6143eb38f93fbead100a71ddc38c0d9\": not found" Apr 28 00:54:35.982536 kubelet[2526]: I0428 00:54:35.982517 2526 scope.go:122] "RemoveContainer" containerID="32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b" Apr 28 00:54:35.982703 containerd[1465]: time="2026-04-28T00:54:35.982660954Z" level=error msg="ContainerStatus for \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\": not found" Apr 28 00:54:35.982895 kubelet[2526]: E0428 00:54:35.982806 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\": not found" containerID="32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b" Apr 28 00:54:35.982895 kubelet[2526]: I0428 00:54:35.982826 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b"} err="failed to get container status \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\": rpc error: code = NotFound desc = an error occurred when try to find container \"32fffeb9eee6ba0cd8c1b530674cd9f9727d893a0317992e3b1fdaa7fa7f349b\": not found" Apr 28 00:54:35.982895 kubelet[2526]: I0428 00:54:35.982840 2526 scope.go:122] "RemoveContainer" containerID="e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065" Apr 28 00:54:35.982985 containerd[1465]: time="2026-04-28T00:54:35.982953601Z" level=error msg="ContainerStatus for \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\": not found" Apr 28 00:54:35.983065 kubelet[2526]: E0428 00:54:35.983047 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\": not found" containerID="e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065" Apr 28 00:54:35.983084 kubelet[2526]: I0428 00:54:35.983068 2526 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065"} err="failed to get container status \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\": rpc error: code = NotFound desc = an error occurred when try to find container \"e13e9b96703346c74c34498668f3a4f8154f9ad671e8bcf60248b01fe332f065\": not found" Apr 28 00:54:35.983084 kubelet[2526]: I0428 00:54:35.983078 2526 scope.go:122] "RemoveContainer" containerID="cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731" Apr 28 00:54:35.983251 containerd[1465]: time="2026-04-28T00:54:35.983223420Z" level=error msg="ContainerStatus for \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\": not found" Apr 28 00:54:35.983374 kubelet[2526]: E0428 00:54:35.983342 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\": not found" containerID="cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731" Apr 28 00:54:35.983457 kubelet[2526]: I0428 00:54:35.983430 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731"} err="failed to get container status \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf6a156d256f78e878570d15a42ddbfebf48a12fda7b0beed6a1c87eedb14731\": not found" Apr 28 00:54:35.983457 kubelet[2526]: I0428 00:54:35.983444 2526 scope.go:122] "RemoveContainer" containerID="83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db" Apr 28 00:54:35.983602 containerd[1465]: 
time="2026-04-28T00:54:35.983575142Z" level=error msg="ContainerStatus for \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\": not found" Apr 28 00:54:35.983690 kubelet[2526]: E0428 00:54:35.983672 2526 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\": not found" containerID="83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db" Apr 28 00:54:35.983760 kubelet[2526]: I0428 00:54:35.983693 2526 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db"} err="failed to get container status \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\": rpc error: code = NotFound desc = an error occurred when try to find container \"83bfbb40b00235dd6a211b06c994dcbf640ea03033018a06e112799bc40b60db\": not found" Apr 28 00:54:36.138663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170-rootfs.mount: Deactivated successfully. Apr 28 00:54:36.138928 systemd[1]: var-lib-kubelet-pods-fed718fc\x2d56e8\x2d4472\x2dbce7\x2d5c57a894600b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpfw2.mount: Deactivated successfully. Apr 28 00:54:36.139028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc-rootfs.mount: Deactivated successfully. Apr 28 00:54:36.139075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc-shm.mount: Deactivated successfully. 
Apr 28 00:54:36.139161 systemd[1]: var-lib-kubelet-pods-eb85c81a\x2d23da\x2d4457\x2d9bba\x2d49e68db7ac00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxw8pv.mount: Deactivated successfully. Apr 28 00:54:36.139226 systemd[1]: var-lib-kubelet-pods-eb85c81a\x2d23da\x2d4457\x2d9bba\x2d49e68db7ac00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 28 00:54:36.139301 systemd[1]: var-lib-kubelet-pods-eb85c81a\x2d23da\x2d4457\x2d9bba\x2d49e68db7ac00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 28 00:54:36.409895 sshd[4382]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:36.417702 systemd[1]: sshd@26-10.0.0.98:22-10.0.0.1:57954.service: Deactivated successfully. Apr 28 00:54:36.419345 systemd[1]: session-27.scope: Deactivated successfully. Apr 28 00:54:36.420795 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Apr 28 00:54:36.429529 systemd[1]: Started sshd@27-10.0.0.98:22-10.0.0.1:57958.service - OpenSSH per-connection server daemon (10.0.0.1:57958). Apr 28 00:54:36.432833 systemd-logind[1445]: Removed session 27. Apr 28 00:54:36.498036 systemd[1]: Created slice kubepods-burstable-podef07089e_b7cb_446a_b8bd_5c04c300997d.slice - libcontainer container kubepods-burstable-podef07089e_b7cb_446a_b8bd_5c04c300997d.slice. Apr 28 00:54:36.503131 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 57958 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:54:36.504080 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:36.515529 systemd-logind[1445]: New session 28 of user core. Apr 28 00:54:36.534423 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 28 00:54:36.768976 kubelet[2526]: I0428 00:54:36.765726 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-cilium-run\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:36.806368 kubelet[2526]: I0428 00:54:36.798579 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-xtables-lock\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:36.830460 kubelet[2526]: I0428 00:54:36.830254 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef07089e-b7cb-446a-b8bd-5c04c300997d-clustermesh-secrets\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:36.886371 kubelet[2526]: I0428 00:54:36.871385 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef07089e-b7cb-446a-b8bd-5c04c300997d-cilium-ipsec-secrets\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.168218 kubelet[2526]: I0428 00:54:37.059260 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-bpf-maps\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.186808 kubelet[2526]: I0428 00:54:37.182574 2526 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-etc-cni-netd\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.186808 kubelet[2526]: I0428 00:54:37.183551 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef07089e-b7cb-446a-b8bd-5c04c300997d-hubble-tls\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.186808 kubelet[2526]: I0428 00:54:37.183646 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-cilium-cgroup\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.186808 kubelet[2526]: I0428 00:54:37.183680 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-lib-modules\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.186808 kubelet[2526]: I0428 00:54:37.183707 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-cni-path\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.186808 kubelet[2526]: I0428 00:54:37.183735 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ef07089e-b7cb-446a-b8bd-5c04c300997d-cilium-config-path\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.190903 kubelet[2526]: I0428 00:54:37.183788 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-host-proc-sys-kernel\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.190903 kubelet[2526]: I0428 00:54:37.183840 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-hostproc\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.190903 kubelet[2526]: I0428 00:54:37.183895 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef07089e-b7cb-446a-b8bd-5c04c300997d-host-proc-sys-net\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.190903 kubelet[2526]: I0428 00:54:37.183907 2526 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57d2\" (UniqueName: \"kubernetes.io/projected/ef07089e-b7cb-446a-b8bd-5c04c300997d-kube-api-access-s57d2\") pod \"cilium-wr7xc\" (UID: \"ef07089e-b7cb-446a-b8bd-5c04c300997d\") " pod="kube-system/cilium-wr7xc" Apr 28 00:54:37.207749 kubelet[2526]: I0428 00:54:37.207681 2526 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eb85c81a-23da-4457-9bba-49e68db7ac00" path="/var/lib/kubelet/pods/eb85c81a-23da-4457-9bba-49e68db7ac00/volumes" Apr 28 00:54:37.214772 kubelet[2526]: I0428 
00:54:37.214292 2526 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fed718fc-56e8-4472-bce7-5c57a894600b" path="/var/lib/kubelet/pods/fed718fc-56e8-4472-bce7-5c57a894600b/volumes" Apr 28 00:54:37.220067 sshd[4395]: pam_unix(sshd:session): session closed for user core Apr 28 00:54:37.240870 systemd[1]: sshd@27-10.0.0.98:22-10.0.0.1:57958.service: Deactivated successfully. Apr 28 00:54:37.247649 systemd[1]: session-28.scope: Deactivated successfully. Apr 28 00:54:37.253249 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit. Apr 28 00:54:37.264429 systemd[1]: Started sshd@28-10.0.0.98:22-10.0.0.1:57966.service - OpenSSH per-connection server daemon (10.0.0.1:57966). Apr 28 00:54:37.265564 systemd-logind[1445]: Removed session 28. Apr 28 00:54:37.332386 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 57966 ssh2: RSA SHA256:LlE/68A0qVd4DdmQfcok9T4l7BHzq3PFAQ3i8Jwjpps Apr 28 00:54:37.334610 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 28 00:54:37.339838 systemd-logind[1445]: New session 29 of user core. Apr 28 00:54:37.351200 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 28 00:54:37.409865 kubelet[2526]: E0428 00:54:37.409809 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:37.413354 containerd[1465]: time="2026-04-28T00:54:37.413060304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wr7xc,Uid:ef07089e-b7cb-446a-b8bd-5c04c300997d,Namespace:kube-system,Attempt:0,}" Apr 28 00:54:37.475652 containerd[1465]: time="2026-04-28T00:54:37.475443731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 28 00:54:37.475652 containerd[1465]: time="2026-04-28T00:54:37.475523781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 28 00:54:37.475652 containerd[1465]: time="2026-04-28T00:54:37.475537091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:54:37.476037 containerd[1465]: time="2026-04-28T00:54:37.475595552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 28 00:54:37.520073 systemd[1]: Started cri-containerd-57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d.scope - libcontainer container 57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d. Apr 28 00:54:37.668775 containerd[1465]: time="2026-04-28T00:54:37.668653682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wr7xc,Uid:ef07089e-b7cb-446a-b8bd-5c04c300997d,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\"" Apr 28 00:54:37.670424 kubelet[2526]: E0428 00:54:37.670386 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:37.689397 containerd[1465]: time="2026-04-28T00:54:37.689238773Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 28 00:54:37.742760 containerd[1465]: time="2026-04-28T00:54:37.741494435Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9\"" Apr 28 00:54:37.743630 containerd[1465]: time="2026-04-28T00:54:37.743587703Z" level=info msg="StartContainer for \"1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9\"" Apr 28 00:54:37.824667 systemd[1]: Started cri-containerd-1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9.scope - libcontainer container 1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9. Apr 28 00:54:37.898101 containerd[1465]: time="2026-04-28T00:54:37.895912230Z" level=info msg="StartContainer for \"1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9\" returns successfully" Apr 28 00:54:37.917296 systemd[1]: cri-containerd-1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9.scope: Deactivated successfully. Apr 28 00:54:37.919894 kubelet[2526]: E0428 00:54:37.919848 2526 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef07089e_b7cb_446a_b8bd_5c04c300997d.slice/cri-containerd-1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9.scope\": RecentStats: unable to find data in memory cache]" Apr 28 00:54:37.998824 containerd[1465]: time="2026-04-28T00:54:37.998498882Z" level=info msg="shim disconnected" id=1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9 namespace=k8s.io Apr 28 00:54:37.998824 containerd[1465]: time="2026-04-28T00:54:37.998572461Z" level=warning msg="cleaning up after shim disconnected" id=1e302f4b25a2036f2a1340d6f9006c2f7661a2821e5bca7337d65326ba8983e9 namespace=k8s.io Apr 28 00:54:37.998824 containerd[1465]: time="2026-04-28T00:54:37.998580602Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:54:38.240365 kubelet[2526]: E0428 00:54:38.240130 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:38.253144 containerd[1465]: time="2026-04-28T00:54:38.252609769Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 28 00:54:38.285546 containerd[1465]: time="2026-04-28T00:54:38.285390900Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600\"" Apr 28 00:54:38.286393 containerd[1465]: time="2026-04-28T00:54:38.286315119Z" level=info msg="StartContainer for \"c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600\"" Apr 28 00:54:38.331747 systemd[1]: Started cri-containerd-c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600.scope - libcontainer container c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600. Apr 28 00:54:38.449089 containerd[1465]: time="2026-04-28T00:54:38.448973920Z" level=info msg="StartContainer for \"c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600\" returns successfully" Apr 28 00:54:38.477490 systemd[1]: cri-containerd-c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600.scope: Deactivated successfully. Apr 28 00:54:38.543829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600-rootfs.mount: Deactivated successfully. 
Apr 28 00:54:38.548630 containerd[1465]: time="2026-04-28T00:54:38.548567843Z" level=info msg="shim disconnected" id=c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600 namespace=k8s.io Apr 28 00:54:38.548630 containerd[1465]: time="2026-04-28T00:54:38.548618227Z" level=warning msg="cleaning up after shim disconnected" id=c5810ad8d0bfe04bd8ebe00f8b514dd1577a516c1f9ccd086288ca3ca7801600 namespace=k8s.io Apr 28 00:54:38.548630 containerd[1465]: time="2026-04-28T00:54:38.548629843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:54:39.249643 kubelet[2526]: E0428 00:54:39.249559 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:39.287090 containerd[1465]: time="2026-04-28T00:54:39.285870494Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 28 00:54:39.368210 containerd[1465]: time="2026-04-28T00:54:39.367909904Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939\"" Apr 28 00:54:39.371126 containerd[1465]: time="2026-04-28T00:54:39.368970054Z" level=info msg="StartContainer for \"edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939\"" Apr 28 00:54:39.440441 systemd[1]: Started cri-containerd-edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939.scope - libcontainer container edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939. 
Apr 28 00:54:39.581100 containerd[1465]: time="2026-04-28T00:54:39.577717625Z" level=info msg="StartContainer for \"edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939\" returns successfully" Apr 28 00:54:39.600257 systemd[1]: cri-containerd-edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939.scope: Deactivated successfully. Apr 28 00:54:39.649899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939-rootfs.mount: Deactivated successfully. Apr 28 00:54:39.663568 containerd[1465]: time="2026-04-28T00:54:39.663445710Z" level=info msg="shim disconnected" id=edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939 namespace=k8s.io Apr 28 00:54:39.663568 containerd[1465]: time="2026-04-28T00:54:39.663529233Z" level=warning msg="cleaning up after shim disconnected" id=edf8e2452450f33dbe1d1a8ba9ff033f1678878e97e0e0092746243e2af12939 namespace=k8s.io Apr 28 00:54:39.663568 containerd[1465]: time="2026-04-28T00:54:39.663536663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 28 00:54:39.948270 kubelet[2526]: E0428 00:54:39.948150 2526 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 28 00:54:40.053194 kubelet[2526]: E0428 00:54:40.052568 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-zctgc" podUID="236ff5e0-4461-4214-aef2-928b5f4971c4" Apr 28 00:54:40.257284 kubelet[2526]: E0428 00:54:40.256852 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 28 00:54:40.269506 
containerd[1465]: time="2026-04-28T00:54:40.269367692Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 28 00:54:40.298626 containerd[1465]: time="2026-04-28T00:54:40.298525665Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391\"" Apr 28 00:54:40.299577 containerd[1465]: time="2026-04-28T00:54:40.299541488Z" level=info msg="StartContainer for \"21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391\"" Apr 28 00:54:40.342958 systemd[1]: Started cri-containerd-21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391.scope - libcontainer container 21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391. Apr 28 00:54:40.383909 systemd[1]: cri-containerd-21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391.scope: Deactivated successfully. Apr 28 00:54:40.389251 containerd[1465]: time="2026-04-28T00:54:40.388811509Z" level=info msg="StartContainer for \"21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391\" returns successfully" Apr 28 00:54:40.419985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391-rootfs.mount: Deactivated successfully. 
Apr 28 00:54:40.425351 containerd[1465]: time="2026-04-28T00:54:40.424335177Z" level=info msg="shim disconnected" id=21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391 namespace=k8s.io
Apr 28 00:54:40.425883 containerd[1465]: time="2026-04-28T00:54:40.425495560Z" level=warning msg="cleaning up after shim disconnected" id=21ea02c0f301fd7ed7195c3c40364749cca7716e5a9cd282b69cb245140f0391 namespace=k8s.io
Apr 28 00:54:40.425883 containerd[1465]: time="2026-04-28T00:54:40.425561957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 28 00:54:41.271150 kubelet[2526]: E0428 00:54:41.271049 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:41.282083 containerd[1465]: time="2026-04-28T00:54:41.282034677Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 28 00:54:41.314669 containerd[1465]: time="2026-04-28T00:54:41.310342075Z" level=info msg="CreateContainer within sandbox \"57ec268a6ce420f29e446f7c569ab860baec864593130fb344b8781158b9577d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba\""
Apr 28 00:54:41.314669 containerd[1465]: time="2026-04-28T00:54:41.311873932Z" level=info msg="StartContainer for \"994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba\""
Apr 28 00:54:41.379193 systemd[1]: Started cri-containerd-994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba.scope - libcontainer container 994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba.
Apr 28 00:54:41.422754 containerd[1465]: time="2026-04-28T00:54:41.422698239Z" level=info msg="StartContainer for \"994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba\" returns successfully"
Apr 28 00:54:41.784940 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 28 00:54:42.054829 kubelet[2526]: E0428 00:54:42.053644 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-zctgc" podUID="236ff5e0-4461-4214-aef2-928b5f4971c4"
Apr 28 00:54:42.278941 kubelet[2526]: E0428 00:54:42.278810 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:43.648959 kubelet[2526]: E0428 00:54:43.648782 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:43.827633 systemd[1]: run-containerd-runc-k8s.io-994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba-runc.Kiz9nE.mount: Deactivated successfully.
Apr 28 00:54:44.054411 kubelet[2526]: E0428 00:54:44.054104 2526 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7d764666f9-zctgc" podUID="236ff5e0-4461-4214-aef2-928b5f4971c4"
Apr 28 00:54:46.052427 kubelet[2526]: E0428 00:54:46.052288 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:46.179218 systemd-networkd[1392]: lxc_health: Link UP
Apr 28 00:54:46.193685 systemd-networkd[1392]: lxc_health: Gained carrier
Apr 28 00:54:46.324654 systemd[1]: run-containerd-runc-k8s.io-994c66201b4c60bb15607e3cb956a3f30a214132ec26da304e1d7c1a4fd3fbba-runc.S6suwt.mount: Deactivated successfully.
Apr 28 00:54:47.051355 containerd[1465]: time="2026-04-28T00:54:47.051091906Z" level=info msg="StopPodSandbox for \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\""
Apr 28 00:54:47.051355 containerd[1465]: time="2026-04-28T00:54:47.051253265Z" level=info msg="TearDown network for sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" successfully"
Apr 28 00:54:47.051355 containerd[1465]: time="2026-04-28T00:54:47.051264032Z" level=info msg="StopPodSandbox for \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" returns successfully"
Apr 28 00:54:47.052623 containerd[1465]: time="2026-04-28T00:54:47.052035084Z" level=info msg="RemovePodSandbox for \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\""
Apr 28 00:54:47.052623 containerd[1465]: time="2026-04-28T00:54:47.052059712Z" level=info msg="Forcibly stopping sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\""
Apr 28 00:54:47.052623 containerd[1465]: time="2026-04-28T00:54:47.052241073Z" level=info msg="TearDown network for sandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" successfully"
Apr 28 00:54:47.069225 containerd[1465]: time="2026-04-28T00:54:47.064269034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 28 00:54:47.069225 containerd[1465]: time="2026-04-28T00:54:47.064551327Z" level=info msg="RemovePodSandbox \"e8eee1b9d2db4da8e4e2567b3e7cf3313b9ea7b51be671a57a175c5c212791bc\" returns successfully"
Apr 28 00:54:47.074747 containerd[1465]: time="2026-04-28T00:54:47.074545478Z" level=info msg="StopPodSandbox for \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\""
Apr 28 00:54:47.074835 containerd[1465]: time="2026-04-28T00:54:47.074797512Z" level=info msg="TearDown network for sandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" successfully"
Apr 28 00:54:47.074835 containerd[1465]: time="2026-04-28T00:54:47.074812396Z" level=info msg="StopPodSandbox for \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" returns successfully"
Apr 28 00:54:47.075803 containerd[1465]: time="2026-04-28T00:54:47.075751290Z" level=info msg="RemovePodSandbox for \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\""
Apr 28 00:54:47.075891 containerd[1465]: time="2026-04-28T00:54:47.075807491Z" level=info msg="Forcibly stopping sandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\""
Apr 28 00:54:47.075933 containerd[1465]: time="2026-04-28T00:54:47.075914001Z" level=info msg="TearDown network for sandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" successfully"
Apr 28 00:54:47.102911 containerd[1465]: time="2026-04-28T00:54:47.102695890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 28 00:54:47.105094 containerd[1465]: time="2026-04-28T00:54:47.104955273Z" level=info msg="RemovePodSandbox \"2e31b90064eddb17c26b1c06b7c7b2cccacd7d2a06be609071f2b2c43b13f170\" returns successfully"
Apr 28 00:54:47.430662 kubelet[2526]: E0428 00:54:47.430604 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:47.484294 kubelet[2526]: I0428 00:54:47.484108 2526 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-wr7xc" podStartSLOduration=11.484096028 podStartE2EDuration="11.484096028s" podCreationTimestamp="2026-04-28 00:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-28 00:54:42.313427405 +0000 UTC m=+115.457685502" watchObservedRunningTime="2026-04-28 00:54:47.484096028 +0000 UTC m=+120.628354125"
Apr 28 00:54:48.003509 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Apr 28 00:54:48.347337 kubelet[2526]: E0428 00:54:48.343252 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:49.333096 kubelet[2526]: E0428 00:54:49.331281 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:50.060923 kubelet[2526]: E0428 00:54:50.060672 2526 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 28 00:54:53.003785 sshd[4403]: pam_unix(sshd:session): session closed for user core
Apr 28 00:54:53.008644 systemd[1]: sshd@28-10.0.0.98:22-10.0.0.1:57966.service: Deactivated successfully.
Apr 28 00:54:53.010470 systemd[1]: session-29.scope: Deactivated successfully.
Apr 28 00:54:53.011108 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit.
Apr 28 00:54:53.015276 systemd-logind[1445]: Removed session 29.