Apr 24 23:58:58.947499 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026 Apr 24 23:58:58.947535 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:58:58.947554 kernel: BIOS-provided physical RAM map: Apr 24 23:58:58.947566 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Apr 24 23:58:58.947577 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Apr 24 23:58:58.947589 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Apr 24 23:58:58.947603 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Apr 24 23:58:58.947616 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Apr 24 23:58:58.947628 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Apr 24 23:58:58.947643 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Apr 24 23:58:58.947655 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Apr 24 23:58:58.947667 kernel: NX (Execute Disable) protection: active Apr 24 23:58:58.947679 kernel: APIC: Static calls initialized Apr 24 23:58:58.947692 kernel: efi: EFI v2.7 by EDK II Apr 24 23:58:58.947707 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018 Apr 24 23:58:58.947753 kernel: SMBIOS 2.7 present. 
Apr 24 23:58:58.947766 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Apr 24 23:58:58.947779 kernel: Hypervisor detected: KVM Apr 24 23:58:58.947793 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 24 23:58:58.947807 kernel: kvm-clock: using sched offset of 3785697368 cycles Apr 24 23:58:58.947821 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 24 23:58:58.947835 kernel: tsc: Detected 2499.996 MHz processor Apr 24 23:58:58.947849 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 24 23:58:58.947864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 24 23:58:58.947878 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Apr 24 23:58:58.947896 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Apr 24 23:58:58.947910 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 24 23:58:58.947924 kernel: Using GB pages for direct mapping Apr 24 23:58:58.947937 kernel: Secure boot disabled Apr 24 23:58:58.947950 kernel: ACPI: Early table checksum verification disabled Apr 24 23:58:58.947963 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Apr 24 23:58:58.947977 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Apr 24 23:58:58.947990 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 24 23:58:58.948003 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 24 23:58:58.948019 kernel: ACPI: FACS 0x00000000789D0000 000040 Apr 24 23:58:58.948032 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Apr 24 23:58:58.948045 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 24 23:58:58.948058 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 24 23:58:58.948072 kernel: ACPI: 
SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Apr 24 23:58:58.948085 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Apr 24 23:58:58.948104 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 24 23:58:58.948121 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 24 23:58:58.948135 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Apr 24 23:58:58.948149 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Apr 24 23:58:58.948162 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Apr 24 23:58:58.948176 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Apr 24 23:58:58.948190 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Apr 24 23:58:58.948207 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Apr 24 23:58:58.948221 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Apr 24 23:58:58.948235 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Apr 24 23:58:58.948249 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Apr 24 23:58:58.948263 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Apr 24 23:58:58.948277 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0] Apr 24 23:58:58.948291 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Apr 24 23:58:58.948304 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 24 23:58:58.948318 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 24 23:58:58.948332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Apr 24 23:58:58.948349 kernel: NUMA: Initialized distance table, cnt=1 Apr 24 23:58:58.948362 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Apr 24 23:58:58.948376 kernel: Zone ranges: Apr 24 
23:58:58.948390 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 24 23:58:58.948404 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Apr 24 23:58:58.948418 kernel: Normal empty Apr 24 23:58:58.948432 kernel: Movable zone start for each node Apr 24 23:58:58.948446 kernel: Early memory node ranges Apr 24 23:58:58.948459 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Apr 24 23:58:58.948476 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Apr 24 23:58:58.948490 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Apr 24 23:58:58.948503 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Apr 24 23:58:58.948518 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 24 23:58:58.948532 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Apr 24 23:58:58.948546 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Apr 24 23:58:58.948560 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Apr 24 23:58:58.948574 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 24 23:58:58.948588 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 24 23:58:58.948602 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Apr 24 23:58:58.948619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 24 23:58:58.948633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 24 23:58:58.948647 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 24 23:58:58.948661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 24 23:58:58.948675 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 24 23:58:58.948697 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 24 23:58:58.948711 kernel: TSC deadline timer available Apr 24 23:58:58.948738 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 24 23:58:58.948753 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 
24 23:58:58.948770 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Apr 24 23:58:58.948784 kernel: Booting paravirtualized kernel on KVM Apr 24 23:58:58.948798 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 24 23:58:58.948813 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 24 23:58:58.948827 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 24 23:58:58.948841 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 24 23:58:58.948854 kernel: pcpu-alloc: [0] 0 1 Apr 24 23:58:58.948868 kernel: kvm-guest: PV spinlocks enabled Apr 24 23:58:58.948882 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 24 23:58:58.948901 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:58:58.948916 kernel: random: crng init done Apr 24 23:58:58.948929 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 24 23:58:58.948943 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 24 23:58:58.948957 kernel: Fallback order for Node 0: 0 Apr 24 23:58:58.948971 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Apr 24 23:58:58.948985 kernel: Policy zone: DMA32 Apr 24 23:58:58.948999 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 24 23:58:58.949016 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 162900K reserved, 0K cma-reserved) Apr 24 23:58:58.949031 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 24 23:58:58.949045 kernel: Kernel/User page tables isolation: enabled Apr 24 23:58:58.949059 kernel: ftrace: allocating 37996 entries in 149 pages Apr 24 23:58:58.949073 kernel: ftrace: allocated 149 pages with 4 groups Apr 24 23:58:58.949087 kernel: Dynamic Preempt: voluntary Apr 24 23:58:58.949101 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 24 23:58:58.949116 kernel: rcu: RCU event tracing is enabled. Apr 24 23:58:58.949130 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 24 23:58:58.949147 kernel: Trampoline variant of Tasks RCU enabled. Apr 24 23:58:58.949161 kernel: Rude variant of Tasks RCU enabled. Apr 24 23:58:58.949175 kernel: Tracing variant of Tasks RCU enabled. Apr 24 23:58:58.949189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 24 23:58:58.949203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 24 23:58:58.949217 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 24 23:58:58.949231 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 24 23:58:58.949259 kernel: Console: colour dummy device 80x25 Apr 24 23:58:58.949272 kernel: printk: console [tty0] enabled Apr 24 23:58:58.949284 kernel: printk: console [ttyS0] enabled Apr 24 23:58:58.949297 kernel: ACPI: Core revision 20230628 Apr 24 23:58:58.949310 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Apr 24 23:58:58.949326 kernel: APIC: Switch to symmetric I/O mode setup Apr 24 23:58:58.949339 kernel: x2apic enabled Apr 24 23:58:58.949353 kernel: APIC: Switched APIC routing to: physical x2apic Apr 24 23:58:58.949367 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 24 23:58:58.949382 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Apr 24 23:58:58.949399 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 24 23:58:58.949414 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 24 23:58:58.949430 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 24 23:58:58.949444 kernel: Spectre V2 : Mitigation: Retpolines Apr 24 23:58:58.949460 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 24 23:58:58.949475 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 24 23:58:58.949491 kernel: RETBleed: Vulnerable Apr 24 23:58:58.949506 kernel: Speculative Store Bypass: Vulnerable Apr 24 23:58:58.949521 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:58:58.949536 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 24 23:58:58.949555 kernel: GDS: Unknown: Dependent on hypervisor status Apr 24 23:58:58.949570 kernel: active return thunk: its_return_thunk Apr 24 23:58:58.949585 kernel: ITS: Mitigation: Aligned branch/return thunks Apr 24 23:58:58.949600 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 24 23:58:58.949617 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 24 23:58:58.949632 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 24 23:58:58.949647 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 24 23:58:58.949662 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 24 23:58:58.949678 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 24 23:58:58.949693 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 24 23:58:58.949708 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 24 23:58:58.949742 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 24 23:58:58.949757 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 24 23:58:58.949773 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 24 23:58:58.949788 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 24 23:58:58.949803 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Apr 24 23:58:58.949818 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Apr 24 23:58:58.949834 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Apr 24 23:58:58.949849 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Apr 24 23:58:58.949864 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Apr 24 23:58:58.949879 kernel: Freeing SMP alternatives memory: 32K Apr 24 23:58:58.949894 kernel: pid_max: default: 32768 minimum: 301 Apr 24 23:58:58.949914 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 24 23:58:58.949929 kernel: landlock: Up and running. Apr 24 23:58:58.949944 kernel: SELinux: Initializing. Apr 24 23:58:58.949959 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 24 23:58:58.949974 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 24 23:58:58.949988 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 24 23:58:58.950002 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:58:58.950017 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:58:58.950032 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 24 23:58:58.950047 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 24 23:58:58.950066 kernel: signal: max sigframe size: 3632 Apr 24 23:58:58.950081 kernel: rcu: Hierarchical SRCU implementation. Apr 24 23:58:58.950097 kernel: rcu: Max phase no-delay instances is 400. Apr 24 23:58:58.950112 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 24 23:58:58.950127 kernel: smp: Bringing up secondary CPUs ... Apr 24 23:58:58.950141 kernel: smpboot: x86: Booting SMP configuration: Apr 24 23:58:58.950164 kernel: .... node #0, CPUs: #1 Apr 24 23:58:58.950191 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 24 23:58:58.950216 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 24 23:58:58.950235 kernel: smp: Brought up 1 node, 2 CPUs Apr 24 23:58:58.950251 kernel: smpboot: Max logical packages: 1 Apr 24 23:58:58.950267 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Apr 24 23:58:58.950283 kernel: devtmpfs: initialized Apr 24 23:58:58.950299 kernel: x86/mm: Memory block size: 128MB Apr 24 23:58:58.950315 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Apr 24 23:58:58.950331 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 24 23:58:58.950347 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 24 23:58:58.950364 kernel: pinctrl core: initialized pinctrl subsystem Apr 24 23:58:58.950383 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 24 23:58:58.950399 kernel: audit: initializing netlink subsys (disabled) Apr 24 23:58:58.950415 kernel: audit: type=2000 audit(1777075138.682:1): state=initialized audit_enabled=0 res=1 Apr 24 23:58:58.950430 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 24 23:58:58.950446 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 24 23:58:58.950462 kernel: cpuidle: using governor menu Apr 24 23:58:58.950478 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 24 23:58:58.950494 kernel: dca service started, version 1.12.1 Apr 24 23:58:58.950511 kernel: PCI: Using configuration type 1 for base access Apr 24 23:58:58.950530 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 24 23:58:58.950546 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 24 23:58:58.950562 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 24 23:58:58.950578 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 24 23:58:58.950594 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 24 23:58:58.950610 kernel: ACPI: Added _OSI(Module Device) Apr 24 23:58:58.950626 kernel: ACPI: Added _OSI(Processor Device) Apr 24 23:58:58.950642 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 24 23:58:58.950658 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 24 23:58:58.950677 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 24 23:58:58.950693 kernel: ACPI: Interpreter enabled Apr 24 23:58:58.950709 kernel: ACPI: PM: (supports S0 S5) Apr 24 23:58:58.950750 kernel: ACPI: Using IOAPIC for interrupt routing Apr 24 23:58:58.950766 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 24 23:58:58.950783 kernel: PCI: Using E820 reservations for host bridge windows Apr 24 23:58:58.950798 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Apr 24 23:58:58.950815 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 24 23:58:58.951030 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 24 23:58:58.951176 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 24 23:58:58.951307 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 24 23:58:58.951327 kernel: acpiphp: Slot [3] registered Apr 24 23:58:58.951344 kernel: acpiphp: Slot [4] registered Apr 24 23:58:58.951360 kernel: acpiphp: Slot [5] registered Apr 24 23:58:58.951375 kernel: acpiphp: Slot [6] registered Apr 24 23:58:58.951391 kernel: acpiphp: Slot [7] registered Apr 24 23:58:58.951410 kernel: 
acpiphp: Slot [8] registered Apr 24 23:58:58.951426 kernel: acpiphp: Slot [9] registered Apr 24 23:58:58.951441 kernel: acpiphp: Slot [10] registered Apr 24 23:58:58.951457 kernel: acpiphp: Slot [11] registered Apr 24 23:58:58.951473 kernel: acpiphp: Slot [12] registered Apr 24 23:58:58.951489 kernel: acpiphp: Slot [13] registered Apr 24 23:58:58.951505 kernel: acpiphp: Slot [14] registered Apr 24 23:58:58.951520 kernel: acpiphp: Slot [15] registered Apr 24 23:58:58.951536 kernel: acpiphp: Slot [16] registered Apr 24 23:58:58.951552 kernel: acpiphp: Slot [17] registered Apr 24 23:58:58.951571 kernel: acpiphp: Slot [18] registered Apr 24 23:58:58.951587 kernel: acpiphp: Slot [19] registered Apr 24 23:58:58.951602 kernel: acpiphp: Slot [20] registered Apr 24 23:58:58.951618 kernel: acpiphp: Slot [21] registered Apr 24 23:58:58.951634 kernel: acpiphp: Slot [22] registered Apr 24 23:58:58.951649 kernel: acpiphp: Slot [23] registered Apr 24 23:58:58.951665 kernel: acpiphp: Slot [24] registered Apr 24 23:58:58.951681 kernel: acpiphp: Slot [25] registered Apr 24 23:58:58.951697 kernel: acpiphp: Slot [26] registered Apr 24 23:58:58.951735 kernel: acpiphp: Slot [27] registered Apr 24 23:58:58.951750 kernel: acpiphp: Slot [28] registered Apr 24 23:58:58.951764 kernel: acpiphp: Slot [29] registered Apr 24 23:58:58.951778 kernel: acpiphp: Slot [30] registered Apr 24 23:58:58.951792 kernel: acpiphp: Slot [31] registered Apr 24 23:58:58.951808 kernel: PCI host bridge to bus 0000:00 Apr 24 23:58:58.951960 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 24 23:58:58.952094 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 24 23:58:58.952227 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 24 23:58:58.952348 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 24 23:58:58.952469 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Apr 24 
23:58:58.952600 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 24 23:58:58.952789 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 24 23:58:58.952940 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 24 23:58:58.953083 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Apr 24 23:58:58.953259 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 24 23:58:58.953477 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Apr 24 23:58:58.953639 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Apr 24 23:58:58.953841 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Apr 24 23:58:58.953986 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Apr 24 23:58:58.954128 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Apr 24 23:58:58.954266 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Apr 24 23:58:58.954443 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Apr 24 23:58:58.954585 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Apr 24 23:58:58.954732 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Apr 24 23:58:58.954864 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Apr 24 23:58:58.954996 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 24 23:58:58.955132 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 24 23:58:58.955267 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Apr 24 23:58:58.955419 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 24 23:58:58.955555 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Apr 24 23:58:58.955576 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 24 23:58:58.955592 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 24 23:58:58.955607 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 24 23:58:58.955636 
kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 24 23:58:58.955657 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 24 23:58:58.955676 kernel: iommu: Default domain type: Translated Apr 24 23:58:58.955689 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 24 23:58:58.955705 kernel: efivars: Registered efivars operations Apr 24 23:58:58.955742 kernel: PCI: Using ACPI for IRQ routing Apr 24 23:58:58.955758 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 24 23:58:58.955776 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Apr 24 23:58:58.955790 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Apr 24 23:58:58.955947 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Apr 24 23:58:58.956092 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Apr 24 23:58:58.956235 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 24 23:58:58.956254 kernel: vgaarb: loaded Apr 24 23:58:58.956270 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 24 23:58:58.956286 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Apr 24 23:58:58.956301 kernel: clocksource: Switched to clocksource kvm-clock Apr 24 23:58:58.956317 kernel: VFS: Disk quotas dquot_6.6.0 Apr 24 23:58:58.956333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 24 23:58:58.956348 kernel: pnp: PnP ACPI init Apr 24 23:58:58.956364 kernel: pnp: PnP ACPI: found 5 devices Apr 24 23:58:58.956384 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 24 23:58:58.956399 kernel: NET: Registered PF_INET protocol family Apr 24 23:58:58.956415 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 24 23:58:58.956430 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 24 23:58:58.956446 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 
bytes, linear) Apr 24 23:58:58.956462 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 24 23:58:58.956477 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 24 23:58:58.956493 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 24 23:58:58.956512 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 24 23:58:58.956527 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 24 23:58:58.956543 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 24 23:58:58.956558 kernel: NET: Registered PF_XDP protocol family Apr 24 23:58:58.956692 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 24 23:58:58.956839 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 24 23:58:58.956955 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 24 23:58:58.957071 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 24 23:58:58.957293 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Apr 24 23:58:58.957449 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 24 23:58:58.957468 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:58:58.957483 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 24 23:58:58.957498 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 24 23:58:58.957512 kernel: clocksource: Switched to clocksource tsc Apr 24 23:58:58.957526 kernel: Initialise system trusted keyrings Apr 24 23:58:58.957541 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 24 23:58:58.957555 kernel: Key type asymmetric registered Apr 24 23:58:58.957574 kernel: Asymmetric key parser 'x509' registered Apr 24 23:58:58.957588 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 24 23:58:58.957603 kernel: io scheduler 
mq-deadline registered Apr 24 23:58:58.957618 kernel: io scheduler kyber registered Apr 24 23:58:58.957632 kernel: io scheduler bfq registered Apr 24 23:58:58.957646 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 24 23:58:58.957660 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 24 23:58:58.957674 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 24 23:58:58.957690 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 24 23:58:58.957709 kernel: i8042: Warning: Keylock active Apr 24 23:58:58.957740 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 24 23:58:58.957754 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 24 23:58:58.957926 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 24 23:58:58.958071 kernel: rtc_cmos 00:00: registered as rtc0 Apr 24 23:58:58.958213 kernel: rtc_cmos 00:00: setting system clock to 2026-04-24T23:58:58 UTC (1777075138) Apr 24 23:58:58.958345 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 24 23:58:58.958363 kernel: intel_pstate: CPU model not supported Apr 24 23:58:58.958383 kernel: efifb: probing for efifb Apr 24 23:58:58.958398 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Apr 24 23:58:58.958414 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Apr 24 23:58:58.958431 kernel: efifb: scrolling: redraw Apr 24 23:58:58.958446 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 24 23:58:58.958463 kernel: Console: switching to colour frame buffer device 100x37 Apr 24 23:58:58.958477 kernel: fb0: EFI VGA frame buffer device Apr 24 23:58:58.958491 kernel: pstore: Using crash dump compression: deflate Apr 24 23:58:58.958506 kernel: pstore: Registered efi_pstore as persistent store backend Apr 24 23:58:58.958527 kernel: NET: Registered PF_INET6 protocol family Apr 24 23:58:58.958543 kernel: Segment Routing with IPv6 Apr 24 23:58:58.958560 kernel: In-situ OAM (IOAM) with IPv6 Apr 24 
23:58:58.958577 kernel: NET: Registered PF_PACKET protocol family Apr 24 23:58:58.958595 kernel: Key type dns_resolver registered Apr 24 23:58:58.958610 kernel: IPI shorthand broadcast: enabled Apr 24 23:58:58.958653 kernel: sched_clock: Marking stable (481001894, 133231122)->(685670707, -71437691) Apr 24 23:58:58.958673 kernel: registered taskstats version 1 Apr 24 23:58:58.958696 kernel: Loading compiled-in X.509 certificates Apr 24 23:58:58.962767 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124' Apr 24 23:58:58.962801 kernel: Key type .fscrypt registered Apr 24 23:58:58.962818 kernel: Key type fscrypt-provisioning registered Apr 24 23:58:58.962835 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 24 23:58:58.962851 kernel: ima: Allocated hash algorithm: sha1 Apr 24 23:58:58.962867 kernel: ima: No architecture policies found Apr 24 23:58:58.962882 kernel: clk: Disabling unused clocks Apr 24 23:58:58.962898 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 24 23:58:58.962915 kernel: Write protecting the kernel read-only data: 36864k Apr 24 23:58:58.962937 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 24 23:58:58.962952 kernel: Run /init as init process Apr 24 23:58:58.962969 kernel: with arguments: Apr 24 23:58:58.962986 kernel: /init Apr 24 23:58:58.963003 kernel: with environment: Apr 24 23:58:58.963018 kernel: HOME=/ Apr 24 23:58:58.963034 kernel: TERM=linux Apr 24 23:58:58.963055 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:58:58.963079 systemd[1]: Detected virtualization amazon. 
Apr 24 23:58:58.963096 systemd[1]: Detected architecture x86-64.
Apr 24 23:58:58.963114 systemd[1]: Running in initrd.
Apr 24 23:58:58.963134 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:58:58.963152 systemd[1]: Hostname set to .
Apr 24 23:58:58.963169 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:58:58.963185 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:58:58.963202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:58:58.963220 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:58:58.963237 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:58:58.963255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:58:58.963272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:58:58.963295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:58:58.963321 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:58:58.963337 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:58:58.963353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:58:58.963369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:58:58.963386 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:58:58.963401 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:58:58.963417 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:58:58.963438 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:58:58.963454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:58:58.963472 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:58:58.963490 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:58:58.963507 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:58:58.963524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:58:58.963541 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:58:58.963558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:58:58.963576 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:58:58.963597 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:58:58.963613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:58:58.963630 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:58:58.963648 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:58:58.963666 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:58:58.963685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:58:58.965768 systemd-journald[179]: Collecting audit messages is disabled.
Apr 24 23:58:58.965835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:58:58.965855 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:58:58.965874 systemd-journald[179]: Journal started
Apr 24 23:58:58.965916 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2a4f46e59dbfa5418f07dd9a7cbfd1) is 4.7M, max 38.2M, 33.4M free.
Apr 24 23:58:58.953828 systemd-modules-load[180]: Inserted module 'overlay'
Apr 24 23:58:58.969831 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:58:58.973039 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:58:58.974682 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:58:58.976232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:58:58.986040 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:58:58.990058 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:58:58.993933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:58:59.004891 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 23:58:59.012095 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:58:59.018737 kernel: Bridge firewalling registered
Apr 24 23:58:59.019341 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 24 23:58:59.023869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:58:59.027105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:58:59.037003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:58:59.040903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:58:59.043296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:58:59.046534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:58:59.047337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:58:59.053572 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:58:59.056948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:58:59.066325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:58:59.089992 dracut-cmdline[212]: dracut-dracut-053
Apr 24 23:58:59.093858 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:58:59.114190 systemd-resolved[213]: Positive Trust Anchors:
Apr 24 23:58:59.114211 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:58:59.114270 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:58:59.122138 systemd-resolved[213]: Defaulting to hostname 'linux'.
Apr 24 23:58:59.123649 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:58:59.128501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:58:59.182753 kernel: SCSI subsystem initialized
Apr 24 23:58:59.193746 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:58:59.204774 kernel: iscsi: registered transport (tcp)
Apr 24 23:58:59.228007 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:58:59.228097 kernel: QLogic iSCSI HBA Driver
Apr 24 23:58:59.323955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:58:59.337979 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:58:59.370346 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:58:59.370430 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:58:59.370454 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:58:59.413747 kernel: raid6: avx512x4 gen() 17622 MB/s
Apr 24 23:58:59.431742 kernel: raid6: avx512x2 gen() 17750 MB/s
Apr 24 23:58:59.449742 kernel: raid6: avx512x1 gen() 17670 MB/s
Apr 24 23:58:59.467740 kernel: raid6: avx2x4 gen() 17701 MB/s
Apr 24 23:58:59.485738 kernel: raid6: avx2x2 gen() 17557 MB/s
Apr 24 23:58:59.504013 kernel: raid6: avx2x1 gen() 12468 MB/s
Apr 24 23:58:59.504092 kernel: raid6: using algorithm avx512x2 gen() 17750 MB/s
Apr 24 23:58:59.522971 kernel: raid6: .... xor() 24498 MB/s, rmw enabled
Apr 24 23:58:59.523041 kernel: raid6: using avx512x2 recovery algorithm
Apr 24 23:58:59.544763 kernel: xor: automatically using best checksumming function avx
Apr 24 23:58:59.705751 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:58:59.717128 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:58:59.721011 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:58:59.744168 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 24 23:58:59.749472 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:58:59.757969 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:58:59.775926 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 24 23:58:59.806682 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:58:59.810923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:58:59.864576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:58:59.871897 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:58:59.895776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:58:59.902513 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:58:59.904796 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:58:59.906841 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:58:59.914222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:58:59.940243 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:58:59.971735 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 23:58:59.980745 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 24 23:58:59.981040 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 24 23:58:59.992738 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 24 23:58:59.994299 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:58:59.994558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:58:59.996778 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:59:00.000373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:59:00.000735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:00.010464 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2b:ad:39:be:23
Apr 24 23:59:00.001329 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:00.012100 (udev-worker)[454]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:59:00.013123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:00.021045 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:59:00.022620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:00.030857 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 24 23:59:00.030917 kernel: AES CTR mode by8 optimization enabled
Apr 24 23:59:00.036019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:59:00.044354 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 24 23:59:00.047832 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 24 23:59:00.061884 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 24 23:59:00.067035 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:00.072743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:59:00.072809 kernel: GPT:9289727 != 33554431
Apr 24 23:59:00.072827 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:59:00.075189 kernel: GPT:9289727 != 33554431
Apr 24 23:59:00.075248 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:59:00.076868 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:00.080083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:59:00.101477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:59:00.155032 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Apr 24 23:59:00.174739 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (452)
Apr 24 23:59:00.211988 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 24 23:59:00.237892 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 24 23:59:00.254637 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 24 23:59:00.255177 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 24 23:59:00.262552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 24 23:59:00.267913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:59:00.276066 disk-uuid[627]: Primary Header is updated.
Apr 24 23:59:00.276066 disk-uuid[627]: Secondary Entries is updated.
Apr 24 23:59:00.276066 disk-uuid[627]: Secondary Header is updated.
Apr 24 23:59:00.282736 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:00.290811 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:00.298997 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:01.310607 disk-uuid[628]: The operation has completed successfully.
Apr 24 23:59:01.311652 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 24 23:59:01.631687 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:59:01.631979 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:59:01.668991 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:59:01.694294 sh[971]: Success
Apr 24 23:59:01.751737 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 24 23:59:01.869516 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:59:01.893910 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:59:01.909778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:59:01.933893 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681
Apr 24 23:59:01.933972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:59:01.935758 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:59:01.938654 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:59:01.938706 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:59:02.012819 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:59:02.017342 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:59:02.018657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:59:02.025980 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:59:02.029029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:59:02.056491 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:59:02.056560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:59:02.059676 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:59:02.078746 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:59:02.092706 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:59:02.095736 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:59:02.102918 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:59:02.112014 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:59:02.137532 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:59:02.150040 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:59:02.182409 systemd-networkd[1163]: lo: Link UP
Apr 24 23:59:02.182422 systemd-networkd[1163]: lo: Gained carrier
Apr 24 23:59:02.184351 systemd-networkd[1163]: Enumeration completed
Apr 24 23:59:02.184910 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:59:02.184915 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:59:02.186269 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:59:02.187749 systemd[1]: Reached target network.target - Network.
Apr 24 23:59:02.189097 systemd-networkd[1163]: eth0: Link UP
Apr 24 23:59:02.189103 systemd-networkd[1163]: eth0: Gained carrier
Apr 24 23:59:02.189117 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:59:02.198818 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.30.251/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 24 23:59:02.326069 ignition[1128]: Ignition 2.19.0
Apr 24 23:59:02.326084 ignition[1128]: Stage: fetch-offline
Apr 24 23:59:02.328072 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:59:02.326370 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:02.326385 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:02.326894 ignition[1128]: Ignition finished successfully
Apr 24 23:59:02.331957 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:59:02.359994 ignition[1173]: Ignition 2.19.0
Apr 24 23:59:02.360007 ignition[1173]: Stage: fetch
Apr 24 23:59:02.360473 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:02.360488 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:02.360618 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:02.375795 ignition[1173]: PUT result: OK
Apr 24 23:59:02.377456 ignition[1173]: parsed url from cmdline: ""
Apr 24 23:59:02.377568 ignition[1173]: no config URL provided
Apr 24 23:59:02.377579 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:59:02.377597 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:59:02.377622 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:02.378189 ignition[1173]: PUT result: OK
Apr 24 23:59:02.378241 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 24 23:59:02.378792 ignition[1173]: GET result: OK
Apr 24 23:59:02.378889 ignition[1173]: parsing config with SHA512: 0d6f43edce38383e0524fe500ec0da9e8c4748ef55648b31eaa86c4d54b2c4f6934fa4a0d60a2feb32dbd22d557e49fda3dc9bc9156d7b6f2fec3199230a238c
Apr 24 23:59:02.384331 unknown[1173]: fetched base config from "system"
Apr 24 23:59:02.384889 ignition[1173]: fetch: fetch complete
Apr 24 23:59:02.384346 unknown[1173]: fetched base config from "system"
Apr 24 23:59:02.384895 ignition[1173]: fetch: fetch passed
Apr 24 23:59:02.384355 unknown[1173]: fetched user config from "aws"
Apr 24 23:59:02.384948 ignition[1173]: Ignition finished successfully
Apr 24 23:59:02.389048 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:59:02.393942 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:59:02.409868 ignition[1180]: Ignition 2.19.0
Apr 24 23:59:02.409881 ignition[1180]: Stage: kargs
Apr 24 23:59:02.410333 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:02.410346 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:02.410480 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:02.411348 ignition[1180]: PUT result: OK
Apr 24 23:59:02.413835 ignition[1180]: kargs: kargs passed
Apr 24 23:59:02.413910 ignition[1180]: Ignition finished successfully
Apr 24 23:59:02.415216 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:59:02.422905 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:59:02.438152 ignition[1186]: Ignition 2.19.0
Apr 24 23:59:02.438164 ignition[1186]: Stage: disks
Apr 24 23:59:02.438628 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:02.438643 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:02.438789 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:02.440295 ignition[1186]: PUT result: OK
Apr 24 23:59:02.442608 ignition[1186]: disks: disks passed
Apr 24 23:59:02.442662 ignition[1186]: Ignition finished successfully
Apr 24 23:59:02.444167 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:59:02.445103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:59:02.445436 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:59:02.446005 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:59:02.446552 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:59:02.447133 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:59:02.451964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:59:02.485558 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 24 23:59:02.489910 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:59:02.496040 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:59:02.602782 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:59:02.602367 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:59:02.603481 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:59:02.625014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:59:02.628855 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:59:02.630021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 23:59:02.630095 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:59:02.630130 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:59:02.649949 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1213)
Apr 24 23:59:02.653279 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:59:02.658024 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:59:02.658058 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:59:02.658076 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:59:02.662954 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:59:02.670744 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:59:02.672588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:59:02.855438 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:59:02.862282 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:59:02.867630 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:59:02.872318 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:59:03.076522 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:59:03.087919 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:59:03.093012 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:59:03.100117 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:59:03.103826 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:59:03.131918 ignition[1329]: INFO : Ignition 2.19.0
Apr 24 23:59:03.133403 ignition[1329]: INFO : Stage: mount
Apr 24 23:59:03.134548 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:03.134548 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:03.134548 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:03.137465 ignition[1329]: INFO : PUT result: OK
Apr 24 23:59:03.142463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:59:03.143826 ignition[1329]: INFO : mount: mount passed
Apr 24 23:59:03.144461 ignition[1329]: INFO : Ignition finished successfully
Apr 24 23:59:03.145982 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:59:03.151849 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:59:03.159847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:59:03.183757 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1341)
Apr 24 23:59:03.188421 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:59:03.188498 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:59:03.188521 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 24 23:59:03.195739 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 24 23:59:03.198442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:59:03.230635 ignition[1358]: INFO : Ignition 2.19.0
Apr 24 23:59:03.230635 ignition[1358]: INFO : Stage: files
Apr 24 23:59:03.232171 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:03.232171 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:03.232171 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:03.233587 ignition[1358]: INFO : PUT result: OK
Apr 24 23:59:03.235844 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:59:03.246305 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:59:03.246305 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:59:03.284603 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:59:03.285969 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:59:03.285969 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:59:03.285281 unknown[1358]: wrote ssh authorized keys file for user: core
Apr 24 23:59:03.297172 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:59:03.298218 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:59:03.380382 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:59:03.553351 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:59:03.554485 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:59:03.554485 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 24 23:59:03.673024 systemd-networkd[1163]: eth0: Gained IPv6LL
Apr 24 23:59:03.773570 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 24 23:59:03.890026 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:59:03.891465 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:59:03.898894 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:59:04.172338 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 24 23:59:04.731371 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:59:04.731371 ignition[1358]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 24 23:59:04.744313 ignition[1358]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:59:04.746819 ignition[1358]: INFO : files: files passed
Apr 24 23:59:04.746819 ignition[1358]: INFO : Ignition finished successfully
Apr 24 23:59:04.747341 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:59:04.752018 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:59:04.762034 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:59:04.768134 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:59:04.768288 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:59:04.784002 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:59:04.784002 initrd-setup-root-after-ignition[1387]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:59:04.787530 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:59:04.788293 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:59:04.790103 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:59:04.795910 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:59:04.825649 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:59:04.825772 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:59:04.826592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:59:04.827392 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:59:04.828792 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:59:04.836023 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:59:04.849149 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:59:04.854954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:59:04.867780 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:59:04.868463 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:59:04.869588 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:59:04.870491 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:59:04.870673 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:59:04.871821 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:59:04.872702 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:59:04.873514 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:59:04.874278 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:59:04.875046 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:59:04.875814 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:59:04.876565 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:59:04.877457 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:59:04.878590 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:59:04.879344 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:59:04.880059 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:59:04.880239 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:59:04.881461 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:59:04.882248 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:59:04.882931 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:59:04.883659 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:59:04.884139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:59:04.884311 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:59:04.885852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:59:04.886040 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:59:04.886746 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:59:04.886900 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:59:04.895117 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:59:04.897781 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:59:04.898330 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:59:04.899894 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:59:04.900776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:59:04.900987 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:59:04.911463 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:59:04.912164 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:59:04.922830 ignition[1411]: INFO : Ignition 2.19.0
Apr 24 23:59:04.923680 ignition[1411]: INFO : Stage: umount
Apr 24 23:59:04.924626 ignition[1411]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:59:04.924626 ignition[1411]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 24 23:59:04.924626 ignition[1411]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 24 23:59:04.927419 ignition[1411]: INFO : PUT result: OK
Apr 24 23:59:04.931833 ignition[1411]: INFO : umount: umount passed
Apr 24 23:59:04.932756 ignition[1411]: INFO : Ignition finished successfully
Apr 24 23:59:04.934508 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:59:04.935221 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:59:04.936586 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:59:04.936658 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:59:04.938535 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:59:04.938603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:59:04.939583 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:59:04.939645 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:59:04.940562 systemd[1]: Stopped target network.target - Network.
Apr 24 23:59:04.940992 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:59:04.941058 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:59:04.941683 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:59:04.942267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:59:04.945776 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:59:04.946210 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:59:04.947145 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:59:04.947818 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:59:04.947879 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:59:04.948443 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:59:04.948490 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:59:04.949172 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:59:04.949240 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:59:04.949851 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:59:04.949911 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:59:04.950681 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:59:04.951356 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:59:04.953583 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:59:04.956806 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Apr 24 23:59:04.959083 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:59:04.959237 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:59:04.960386 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:59:04.960516 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:59:04.965166 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:59:04.965226 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:59:04.970876 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:59:04.971493 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:59:04.971581 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:59:04.972263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:59:04.972324 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:59:04.972991 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:59:04.973049 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:59:04.974603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:59:04.974660 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:59:04.975361 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:59:04.989133 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:59:04.989850 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:59:04.993525 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:59:04.993671 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:59:04.994603 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:59:04.994662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:59:04.995173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:59:04.995219 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:59:04.996303 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:59:04.996366 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:59:04.997364 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:59:04.997420 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:59:04.998080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:59:04.998131 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:59:05.008207 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:59:05.009649 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:59:05.009761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:59:05.012158 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:59:05.012236 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:59:05.012913 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:59:05.012981 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:59:05.013542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:59:05.013602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:59:05.017313 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:59:05.017779 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:59:05.068896 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:59:05.069040 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:59:05.070206 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:59:05.070870 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:59:05.070945 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:59:05.077940 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:59:05.085874 systemd[1]: Switching root.
Apr 24 23:59:05.123595 systemd-journald[179]: Journal stopped
Apr 24 23:59:07.112154 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:59:07.112255 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:59:07.112279 kernel: SELinux: policy capability open_perms=1
Apr 24 23:59:07.112299 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:59:07.112324 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:59:07.112350 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:59:07.112375 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:59:07.112395 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:59:07.112413 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:59:07.112429 kernel: audit: type=1403 audit(1777075145.853:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:59:07.112448 systemd[1]: Successfully loaded SELinux policy in 62.259ms.
Apr 24 23:59:07.112476 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.821ms.
Apr 24 23:59:07.112499 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:59:07.112518 systemd[1]: Detected virtualization amazon.
Apr 24 23:59:07.112544 systemd[1]: Detected architecture x86-64.
Apr 24 23:59:07.112566 systemd[1]: Detected first boot.
Apr 24 23:59:07.112598 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:59:07.112620 zram_generator::config[1453]: No configuration found.
Apr 24 23:59:07.112639 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:59:07.112660 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:59:07.112681 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:59:07.112703 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:59:07.114803 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:59:07.114844 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:59:07.114866 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:59:07.114890 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:59:07.114920 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:59:07.114943 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:59:07.114967 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:59:07.114990 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:59:07.115014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:59:07.115042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:59:07.115065 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:59:07.115096 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:59:07.115119 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:59:07.115143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:59:07.115166 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 23:59:07.115190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:59:07.115215 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:59:07.115238 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:59:07.115263 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:59:07.115283 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:59:07.115306 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:59:07.115325 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:59:07.115347 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:59:07.115366 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:59:07.115386 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:59:07.115407 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:59:07.115432 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:59:07.115454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:59:07.115474 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:59:07.115495 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:59:07.115517 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:59:07.115538 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:59:07.115561 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:59:07.115582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:59:07.115610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:59:07.115632 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:59:07.115652 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:59:07.115676 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 23:59:07.115696 systemd[1]: Reached target machines.target - Containers.
Apr 24 23:59:07.115741 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 23:59:07.115760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 23:59:07.115781 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:59:07.115800 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 23:59:07.115826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 23:59:07.115848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 23:59:07.115869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 23:59:07.115891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 23:59:07.115913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 23:59:07.115936 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 23:59:07.115957 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 23:59:07.115980 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 23:59:07.116004 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 23:59:07.116026 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 23:59:07.116047 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:59:07.116068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:59:07.116090 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 23:59:07.116111 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 23:59:07.116133 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:59:07.116154 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 23:59:07.116176 systemd[1]: Stopped verity-setup.service.
Apr 24 23:59:07.116201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 23:59:07.116262 systemd-journald[1535]: Collecting audit messages is disabled.
Apr 24 23:59:07.116302 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 23:59:07.116321 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 23:59:07.116339 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 23:59:07.116357 kernel: loop: module loaded
Apr 24 23:59:07.116378 systemd-journald[1535]: Journal started
Apr 24 23:59:07.116420 systemd-journald[1535]: Runtime Journal (/run/log/journal/ec2a4f46e59dbfa5418f07dd9a7cbfd1) is 4.7M, max 38.2M, 33.4M free.
Apr 24 23:59:06.761450 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 23:59:07.119084 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 23:59:07.119120 kernel: fuse: init (API version 7.39)
Apr 24 23:59:06.801166 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 24 23:59:06.801605 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 23:59:07.122789 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:59:07.126624 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 23:59:07.127435 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 23:59:07.132434 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:59:07.134275 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 23:59:07.134797 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 23:59:07.137036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 23:59:07.137883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 23:59:07.140286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 23:59:07.140476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 23:59:07.142674 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 23:59:07.142896 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 23:59:07.144186 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 23:59:07.144649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 23:59:07.146895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:59:07.147985 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 23:59:07.169417 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 23:59:07.180844 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 23:59:07.185163 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 23:59:07.187955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 23:59:07.197931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:59:07.203946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:59:07.208243 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 23:59:07.210183 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 23:59:07.210934 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 23:59:07.216747 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 23:59:07.216814 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:59:07.222256 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 24 23:59:07.233756 kernel: ACPI: bus type drm_connector registered
Apr 24 23:59:07.237023 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 23:59:07.245388 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 23:59:07.246653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 23:59:07.251782 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 23:59:07.257786 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 23:59:07.258667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 23:59:07.266947 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 23:59:07.269225 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 23:59:07.271839 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 23:59:07.272812 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 23:59:07.273051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 23:59:07.274030 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 23:59:07.293847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:59:07.307649 systemd-journald[1535]: Time spent on flushing to /var/log/journal/ec2a4f46e59dbfa5418f07dd9a7cbfd1 is 59.745ms for 989 entries.
Apr 24 23:59:07.307649 systemd-journald[1535]: System Journal (/var/log/journal/ec2a4f46e59dbfa5418f07dd9a7cbfd1) is 8.0M, max 195.6M, 187.6M free.
Apr 24 23:59:07.385988 systemd-journald[1535]: Received client request to flush runtime journal.
Apr 24 23:59:07.386053 kernel: loop0: detected capacity change from 0 to 61336
Apr 24 23:59:07.311982 systemd-tmpfiles[1574]: ACLs are not supported, ignoring.
Apr 24 23:59:07.312006 systemd-tmpfiles[1574]: ACLs are not supported, ignoring.
Apr 24 23:59:07.328806 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:59:07.340947 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 23:59:07.342076 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 23:59:07.343941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 23:59:07.355990 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 24 23:59:07.404240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 23:59:07.414501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 23:59:07.415393 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 24 23:59:07.436740 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 23:59:07.442327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:59:07.454014 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 24 23:59:07.462112 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 23:59:07.477968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:59:07.479383 kernel: loop1: detected capacity change from 0 to 140768
Apr 24 23:59:07.495354 udevadm[1602]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 24 23:59:07.521191 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Apr 24 23:59:07.521221 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Apr 24 23:59:07.537333 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:59:07.586742 kernel: loop2: detected capacity change from 0 to 228704
Apr 24 23:59:07.698743 kernel: loop3: detected capacity change from 0 to 142488
Apr 24 23:59:07.833931 kernel: loop4: detected capacity change from 0 to 61336
Apr 24 23:59:07.870752 kernel: loop5: detected capacity change from 0 to 140768
Apr 24 23:59:07.906747 kernel: loop6: detected capacity change from 0 to 228704
Apr 24 23:59:07.950743 kernel: loop7: detected capacity change from 0 to 142488
Apr 24 23:59:07.978709 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 24 23:59:07.980634 (sd-merge)[1610]: Merged extensions into '/usr'.
Apr 24 23:59:07.986781 systemd[1]: Reloading requested from client PID 1586 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 23:59:07.986947 systemd[1]: Reloading...
Apr 24 23:59:08.108957 zram_generator::config[1633]: No configuration found.
Apr 24 23:59:08.341710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:59:08.416426 systemd[1]: Reloading finished in 428 ms.
Apr 24 23:59:08.450077 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 23:59:08.450999 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 23:59:08.464059 systemd[1]: Starting ensure-sysext.service...
Apr 24 23:59:08.468297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:59:08.479923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:59:08.487883 systemd[1]: Reloading requested from client PID 1689 ('systemctl') (unit ensure-sysext.service)...
Apr 24 23:59:08.487906 systemd[1]: Reloading...
Apr 24 23:59:08.524005 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 23:59:08.524536 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 23:59:08.529989 systemd-tmpfiles[1690]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 23:59:08.530495 systemd-tmpfiles[1690]: ACLs are not supported, ignoring.
Apr 24 23:59:08.530594 systemd-tmpfiles[1690]: ACLs are not supported, ignoring.
Apr 24 23:59:08.539281 systemd-tmpfiles[1690]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:59:08.539300 systemd-tmpfiles[1690]: Skipping /boot
Apr 24 23:59:08.541163 systemd-udevd[1691]: Using default interface naming scheme 'v255'.
Apr 24 23:59:08.564048 systemd-tmpfiles[1690]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 23:59:08.564064 systemd-tmpfiles[1690]: Skipping /boot Apr 24 23:59:08.636740 zram_generator::config[1720]: No configuration found. Apr 24 23:59:08.759579 (udev-worker)[1734]: Network interface NamePolicy= disabled on kernel command line. Apr 24 23:59:08.854209 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 24 23:59:08.874764 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 24 23:59:08.877779 kernel: ACPI: button: Power Button [PWRF] Apr 24 23:59:08.881881 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Apr 24 23:59:08.885736 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 24 23:59:08.889737 kernel: ACPI: button: Sleep Button [SLPF] Apr 24 23:59:08.946775 ldconfig[1581]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:59:08.977435 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1728) Apr 24 23:59:09.001162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:59:09.034747 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:59:09.141132 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 24 23:59:09.141481 systemd[1]: Reloading finished in 653 ms. Apr 24 23:59:09.160998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:59:09.162754 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:59:09.170389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 24 23:59:09.208519 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:59:09.213152 systemd[1]: Finished ensure-sysext.service. Apr 24 23:59:09.229856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 24 23:59:09.230556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:59:09.237912 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:59:09.240930 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:59:09.241799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:59:09.246933 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:59:09.252990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:59:09.260266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:59:09.265921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:59:09.272402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:59:09.273291 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:59:09.287211 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:59:09.293907 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:59:09.300894 lvm[1886]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:59:09.312981 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 24 23:59:09.325027 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:59:09.326861 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:59:09.339166 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:59:09.346629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:59:09.348028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:59:09.349584 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:59:09.357513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:59:09.357997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:59:09.364032 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:59:09.364257 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:59:09.365324 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:59:09.366781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:59:09.375514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:59:09.380815 augenrules[1910]: No rules Apr 24 23:59:09.381139 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:59:09.382437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:59:09.391923 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:59:09.393824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:59:09.400949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 24 23:59:09.401197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:59:09.411102 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:59:09.415907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:59:09.423684 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:59:09.439709 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:59:09.452254 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:59:09.458918 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:59:09.476778 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:59:09.490818 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:59:09.500424 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:59:09.511824 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:59:09.514996 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:59:09.544387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:59:09.582426 systemd-networkd[1905]: lo: Link UP Apr 24 23:59:09.582843 systemd-networkd[1905]: lo: Gained carrier Apr 24 23:59:09.584853 systemd-networkd[1905]: Enumeration completed Apr 24 23:59:09.585090 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 24 23:59:09.586233 systemd-networkd[1905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:59:09.587838 systemd-networkd[1905]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:59:09.590543 systemd-networkd[1905]: eth0: Link UP Apr 24 23:59:09.593440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:59:09.594469 systemd-networkd[1905]: eth0: Gained carrier Apr 24 23:59:09.594498 systemd-networkd[1905]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:59:09.605936 systemd-networkd[1905]: eth0: DHCPv4 address 172.31.30.251/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 24 23:59:09.607573 systemd-resolved[1907]: Positive Trust Anchors: Apr 24 23:59:09.607591 systemd-resolved[1907]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:59:09.607641 systemd-resolved[1907]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:59:09.613039 systemd-resolved[1907]: Defaulting to hostname 'linux'. Apr 24 23:59:09.614863 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:59:09.615506 systemd[1]: Reached target network.target - Network. Apr 24 23:59:09.615980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 24 23:59:09.616383 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:59:09.616918 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:59:09.617342 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:59:09.617891 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:59:09.618368 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:59:09.618759 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:59:09.619128 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:59:09.619167 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:59:09.619533 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:59:09.621251 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:59:09.623175 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:59:09.628504 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:59:09.629694 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:59:09.630239 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:59:09.630631 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:59:09.631060 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:59:09.631103 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:59:09.632214 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:59:09.636936 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Apr 24 23:59:09.640819 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:59:09.645801 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:59:09.651047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:59:09.654881 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:59:09.660464 jq[1950]: false Apr 24 23:59:09.660949 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:59:09.665982 systemd[1]: Started ntpd.service - Network Time Service. Apr 24 23:59:09.672889 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:59:09.680979 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 24 23:59:09.701951 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 23:59:09.719796 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:59:09.733901 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:59:09.735008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:59:09.735688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:59:09.736922 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 23:59:09.747909 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:59:09.758249 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:59:09.758502 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 24 23:59:09.762189 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:59:09.762802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 23:59:09.769506 extend-filesystems[1951]: Found loop4 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found loop5 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found loop6 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found loop7 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p1 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p2 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p3 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found usr Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p4 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p6 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p7 Apr 24 23:59:09.773708 extend-filesystems[1951]: Found nvme0n1p9 Apr 24 23:59:09.773708 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9 Apr 24 23:59:09.807197 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:59:09.807465 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:59:09.809262 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:46:02 UTC 2026 (1): Starting Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Fri Apr 24 21:46:02 UTC 2026 (1): Starting Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: ---------------------------------------------------- Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: corporation. Support and training for ntp-4 are Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: available at https://www.nwtime.org/support Apr 24 23:59:09.810122 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: ---------------------------------------------------- Apr 24 23:59:09.809290 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 24 23:59:09.809300 ntpd[1953]: ---------------------------------------------------- Apr 24 23:59:09.811961 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: proto: precision = 0.093 usec (-23) Apr 24 23:59:09.809310 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Apr 24 23:59:09.809319 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 24 23:59:09.809329 ntpd[1953]: corporation. Support and training for ntp-4 are Apr 24 23:59:09.809339 ntpd[1953]: available at https://www.nwtime.org/support Apr 24 23:59:09.809348 ntpd[1953]: ---------------------------------------------------- Apr 24 23:59:09.811610 ntpd[1953]: proto: precision = 0.093 usec (-23) Apr 24 23:59:09.813953 ntpd[1953]: basedate set to 2026-04-12 Apr 24 23:59:09.814827 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: basedate set to 2026-04-12 Apr 24 23:59:09.814827 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: gps base set to 2026-04-12 (week 2414) Apr 24 23:59:09.813978 ntpd[1953]: gps base set to 2026-04-12 (week 2414) Apr 24 23:59:09.833101 jq[1969]: true Apr 24 23:59:09.836599 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Apr 24 23:59:09.837445 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Apr 24 23:59:09.837445 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 24 23:59:09.836674 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 24 23:59:09.838929 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Apr 24 23:59:09.838986 ntpd[1953]: Listen normally on 3 
eth0 172.31.30.251:123 Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Listen normally on 3 eth0 172.31.30.251:123 Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Listen normally on 4 lo [::1]:123 Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: bind(21) AF_INET6 fe80::42b:adff:fe39:be23%2#123 flags 0x11 failed: Cannot assign requested address Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: unable to create socket on eth0 (5) for fe80::42b:adff:fe39:be23%2#123 Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: failed to init interface for address fe80::42b:adff:fe39:be23%2 Apr 24 23:59:09.839966 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: Listening on routing socket on fd #21 for interface updates Apr 24 23:59:09.839034 ntpd[1953]: Listen normally on 4 lo [::1]:123 Apr 24 23:59:09.839091 ntpd[1953]: bind(21) AF_INET6 fe80::42b:adff:fe39:be23%2#123 flags 0x11 failed: Cannot assign requested address Apr 24 23:59:09.839115 ntpd[1953]: unable to create socket on eth0 (5) for fe80::42b:adff:fe39:be23%2#123 Apr 24 23:59:09.839131 ntpd[1953]: failed to init interface for address fe80::42b:adff:fe39:be23%2 Apr 24 23:59:09.839167 ntpd[1953]: Listening on routing socket on fd #21 for interface updates Apr 24 23:59:09.847228 update_engine[1968]: I20260424 23:59:09.847146 1968 main.cc:92] Flatcar Update Engine starting Apr 24 23:59:09.863776 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:59:09.864346 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:59:09.864346 ntpd[1953]: 24 Apr 23:59:09 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:59:09.863814 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 24 23:59:09.870360 dbus-daemon[1949]: [system] SELinux support is enabled Apr 24 
23:59:09.870996 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:59:09.872355 (ntainerd)[1984]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:59:09.878237 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:59:09.878284 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:59:09.879101 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 23:59:09.879134 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:59:09.905144 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9 Apr 24 23:59:09.906410 tar[1971]: linux-amd64/LICENSE Apr 24 23:59:09.906410 tar[1971]: linux-amd64/helm Apr 24 23:59:09.915385 extend-filesystems[1997]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:59:09.913850 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1905 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 24 23:59:09.918038 update_engine[1968]: I20260424 23:59:09.917901 1968 update_check_scheduler.cc:74] Next update check in 8m8s Apr 24 23:59:09.922000 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:59:09.929739 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 24 23:59:09.934730 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Apr 24 23:59:09.937692 jq[1985]: true Apr 24 23:59:09.938453 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 23:59:09.983214 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 24 23:59:10.018004 coreos-metadata[1948]: Apr 24 23:59:10.017 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 24 23:59:10.024702 coreos-metadata[1948]: Apr 24 23:59:10.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 24 23:59:10.029420 coreos-metadata[1948]: Apr 24 23:59:10.029 INFO Fetch successful Apr 24 23:59:10.029420 coreos-metadata[1948]: Apr 24 23:59:10.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 24 23:59:10.029808 systemd-logind[1963]: Watching system buttons on /dev/input/event1 (Power Button) Apr 24 23:59:10.029848 systemd-logind[1963]: Watching system buttons on /dev/input/event3 (Sleep Button) Apr 24 23:59:10.029888 systemd-logind[1963]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 23:59:10.031378 systemd-logind[1963]: New seat seat0. Apr 24 23:59:10.034031 coreos-metadata[1948]: Apr 24 23:59:10.034 INFO Fetch successful Apr 24 23:59:10.034130 coreos-metadata[1948]: Apr 24 23:59:10.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 24 23:59:10.036331 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 24 23:59:10.058748 coreos-metadata[1948]: Apr 24 23:59:10.052 INFO Fetch successful Apr 24 23:59:10.058748 coreos-metadata[1948]: Apr 24 23:59:10.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 24 23:59:10.058748 coreos-metadata[1948]: Apr 24 23:59:10.057 INFO Fetch successful Apr 24 23:59:10.058748 coreos-metadata[1948]: Apr 24 23:59:10.057 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 24 23:59:10.067244 coreos-metadata[1948]: Apr 24 23:59:10.066 INFO Fetch failed with 404: resource not found Apr 24 23:59:10.067244 coreos-metadata[1948]: Apr 24 23:59:10.066 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 24 23:59:10.072506 coreos-metadata[1948]: Apr 24 23:59:10.072 INFO Fetch successful Apr 24 23:59:10.072506 coreos-metadata[1948]: Apr 24 23:59:10.072 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 24 23:59:10.076745 coreos-metadata[1948]: Apr 24 23:59:10.074 INFO Fetch successful Apr 24 23:59:10.076745 coreos-metadata[1948]: Apr 24 23:59:10.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 24 23:59:10.076745 coreos-metadata[1948]: Apr 24 23:59:10.076 INFO Fetch successful Apr 24 23:59:10.076745 coreos-metadata[1948]: Apr 24 23:59:10.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 24 23:59:10.082065 coreos-metadata[1948]: Apr 24 23:59:10.081 INFO Fetch successful Apr 24 23:59:10.082065 coreos-metadata[1948]: Apr 24 23:59:10.081 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 24 23:59:10.092359 coreos-metadata[1948]: Apr 24 23:59:10.087 INFO Fetch successful Apr 24 23:59:10.207745 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 24 23:59:10.218437 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Apr 24 23:59:10.219529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:59:10.223659 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1734) Apr 24 23:59:10.232976 extend-filesystems[1997]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 24 23:59:10.232976 extend-filesystems[1997]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 24 23:59:10.232976 extend-filesystems[1997]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 24 23:59:10.235293 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9 Apr 24 23:59:10.235450 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:59:10.237438 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:59:10.240451 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:59:10.241095 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:59:10.256083 systemd[1]: Starting sshkeys.service... Apr 24 23:59:10.326688 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 24 23:59:10.327179 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 24 23:59:10.331695 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 24 23:59:10.341015 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 23:59:10.351244 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 23:59:10.363818 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 24 23:59:10.393816 locksmithd[2001]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:59:10.410702 polkitd[2070]: Started polkitd version 121 Apr 24 23:59:10.418927 polkitd[2070]: Loading rules from directory /etc/polkit-1/rules.d Apr 24 23:59:10.419016 polkitd[2070]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 24 23:59:10.420170 polkitd[2070]: Finished loading, compiling and executing 2 rules Apr 24 23:59:10.420947 systemd[1]: Started polkit.service - Authorization Manager. Apr 24 23:59:10.420734 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 24 23:59:10.423017 polkitd[2070]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 24 23:59:10.457071 systemd-hostnamed[2000]: Hostname set to (transient) Apr 24 23:59:10.460499 systemd-resolved[1907]: System hostname changed to 'ip-172-31-30-251'. Apr 24 23:59:10.529233 coreos-metadata[2066]: Apr 24 23:59:10.528 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 24 23:59:10.529233 coreos-metadata[2066]: Apr 24 23:59:10.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 24 23:59:10.529233 coreos-metadata[2066]: Apr 24 23:59:10.528 INFO Fetch successful Apr 24 23:59:10.529233 coreos-metadata[2066]: Apr 24 23:59:10.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 24 23:59:10.529233 coreos-metadata[2066]: Apr 24 23:59:10.528 INFO Fetch successful Apr 24 23:59:10.530990 unknown[2066]: wrote ssh authorized keys file for user: core Apr 24 23:59:10.578590 update-ssh-keys[2123]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:59:10.580441 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 23:59:10.585986 systemd[1]: Finished sshkeys.service. 
Apr 24 23:59:10.784271 containerd[1984]: time="2026-04-24T23:59:10.784117600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:59:10.812994 ntpd[1953]: bind(24) AF_INET6 fe80::42b:adff:fe39:be23%2#123 flags 0x11 failed: Cannot assign requested address
Apr 24 23:59:10.813043 ntpd[1953]: unable to create socket on eth0 (6) for fe80::42b:adff:fe39:be23%2#123
Apr 24 23:59:10.813423 ntpd[1953]: 24 Apr 23:59:10 ntpd[1953]: bind(24) AF_INET6 fe80::42b:adff:fe39:be23%2#123 flags 0x11 failed: Cannot assign requested address
Apr 24 23:59:10.813423 ntpd[1953]: 24 Apr 23:59:10 ntpd[1953]: unable to create socket on eth0 (6) for fe80::42b:adff:fe39:be23%2#123
Apr 24 23:59:10.813423 ntpd[1953]: 24 Apr 23:59:10 ntpd[1953]: failed to init interface for address fe80::42b:adff:fe39:be23%2
Apr 24 23:59:10.813057 ntpd[1953]: failed to init interface for address fe80::42b:adff:fe39:be23%2
Apr 24 23:59:10.903271 containerd[1984]: time="2026-04-24T23:59:10.903188188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.906316 containerd[1984]: time="2026-04-24T23:59:10.906262479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:59:10.906316 containerd[1984]: time="2026-04-24T23:59:10.906312566Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:59:10.906470 containerd[1984]: time="2026-04-24T23:59:10.906334340Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906530322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906561661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906631390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906647803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906876111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906896391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906915404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.906931511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.907018742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.907271328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:59:10.907740 containerd[1984]: time="2026-04-24T23:59:10.907409434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:59:10.908194 containerd[1984]: time="2026-04-24T23:59:10.907428171Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:59:10.908194 containerd[1984]: time="2026-04-24T23:59:10.907517487Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:59:10.908194 containerd[1984]: time="2026-04-24T23:59:10.907570682Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:59:10.914735 containerd[1984]: time="2026-04-24T23:59:10.914683129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:59:10.914842 containerd[1984]: time="2026-04-24T23:59:10.914779522Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:59:10.914842 containerd[1984]: time="2026-04-24T23:59:10.914803436Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:59:10.914934 containerd[1984]: time="2026-04-24T23:59:10.914864377Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:59:10.914934 containerd[1984]: time="2026-04-24T23:59:10.914891159Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:59:10.915097 containerd[1984]: time="2026-04-24T23:59:10.915075981Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:59:10.915852 containerd[1984]: time="2026-04-24T23:59:10.915829569Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:59:10.915993 containerd[1984]: time="2026-04-24T23:59:10.915973268Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:59:10.916040 containerd[1984]: time="2026-04-24T23:59:10.916002421Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:59:10.916040 containerd[1984]: time="2026-04-24T23:59:10.916023827Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:59:10.916111 containerd[1984]: time="2026-04-24T23:59:10.916048860Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916111 containerd[1984]: time="2026-04-24T23:59:10.916072968Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916111 containerd[1984]: time="2026-04-24T23:59:10.916092039Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916213 containerd[1984]: time="2026-04-24T23:59:10.916118926Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916213 containerd[1984]: time="2026-04-24T23:59:10.916141154Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916213 containerd[1984]: time="2026-04-24T23:59:10.916167756Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916213 containerd[1984]: time="2026-04-24T23:59:10.916189750Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916213 containerd[1984]: time="2026-04-24T23:59:10.916208858Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916238805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916259983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916279202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916299236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916317788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916348708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916367038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916390 containerd[1984]: time="2026-04-24T23:59:10.916386284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916406453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916437917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916457292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916476240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916501808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916525900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916566444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916585693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.916690 containerd[1984]: time="2026-04-24T23:59:10.916601931Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917344331Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917455469Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917475044Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917494196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917509664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917528920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917548772Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:59:10.918425 containerd[1984]: time="2026-04-24T23:59:10.917564088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:59:10.918799 containerd[1984]: time="2026-04-24T23:59:10.917985548Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:59:10.918799 containerd[1984]: time="2026-04-24T23:59:10.918070672Z" level=info msg="Connect containerd service"
Apr 24 23:59:10.918799 containerd[1984]: time="2026-04-24T23:59:10.918112853Z" level=info msg="using legacy CRI server"
Apr 24 23:59:10.918799 containerd[1984]: time="2026-04-24T23:59:10.918122188Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:59:10.918799 containerd[1984]: time="2026-04-24T23:59:10.918250988Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.920756482Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.920904984Z" level=info msg="Start subscribing containerd event"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.920963509Z" level=info msg="Start recovering state"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.921045194Z" level=info msg="Start event monitor"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.921063032Z" level=info msg="Start snapshots syncer"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.921074980Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.921086366Z" level=info msg="Start streaming server"
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.922125859Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:59:10.922739 containerd[1984]: time="2026-04-24T23:59:10.922582705Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:59:10.922794 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:59:10.929979 containerd[1984]: time="2026-04-24T23:59:10.929929723Z" level=info msg="containerd successfully booted in 0.148507s"
Apr 24 23:59:10.989647 sshd_keygen[1988]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:59:11.026515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:59:11.037098 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:59:11.048408 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:59:11.048670 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:59:11.061672 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:59:11.075702 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:59:11.086255 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 23:59:11.094102 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 23:59:11.095372 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 23:59:11.226193 tar[1971]: linux-amd64/README.md
Apr 24 23:59:11.237657 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 23:59:11.544930 systemd-networkd[1905]: eth0: Gained IPv6LL
Apr 24 23:59:11.548780 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:59:11.550451 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:59:11.561110 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 24 23:59:11.568844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:59:11.575520 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:59:11.620280 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:59:11.634662 amazon-ssm-agent[2173]: Initializing new seelog logger
Apr 24 23:59:11.635042 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete
Apr 24 23:59:11.635042 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.635042 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.635355 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 processing appconfig overrides
Apr 24 23:59:11.635745 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.635745 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.635850 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 processing appconfig overrides
Apr 24 23:59:11.636293 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.636293 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.636377 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 processing appconfig overrides
Apr 24 23:59:11.636932 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO Proxy environment variables:
Apr 24 23:59:11.638692 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.638692 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 24 23:59:11.638863 amazon-ssm-agent[2173]: 2026/04/24 23:59:11 processing appconfig overrides
Apr 24 23:59:11.736345 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO no_proxy:
Apr 24 23:59:11.837654 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO https_proxy:
Apr 24 23:59:11.934111 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO http_proxy:
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO Checking if agent identity type OnPrem can be assumed
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO Checking if agent identity type EC2 can be assumed
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO Agent will take identity from EC2
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] Starting Core Agent
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [Registrar] Starting registrar module
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [EC2Identity] EC2 registration was successful.
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [CredentialRefresher] credentialRefresher has started
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 24 23:59:11.935723 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 24 23:59:12.032798 amazon-ssm-agent[2173]: 2026-04-24 23:59:11 INFO [CredentialRefresher] Next credential rotation will be in 30.1166599732 minutes
Apr 24 23:59:12.950167 amazon-ssm-agent[2173]: 2026-04-24 23:59:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 24 23:59:13.050990 amazon-ssm-agent[2173]: 2026-04-24 23:59:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started
Apr 24 23:59:13.152058 amazon-ssm-agent[2173]: 2026-04-24 23:59:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 24 23:59:13.593107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:59:13.593423 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:59:13.594975 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 23:59:13.596335 systemd[1]: Startup finished in 610ms (kernel) + 7.138s (initrd) + 7.801s (userspace) = 15.550s.
Apr 24 23:59:13.809769 ntpd[1953]: Listen normally on 7 eth0 [fe80::42b:adff:fe39:be23%2]:123
Apr 24 23:59:13.810157 ntpd[1953]: 24 Apr 23:59:13 ntpd[1953]: Listen normally on 7 eth0 [fe80::42b:adff:fe39:be23%2]:123
Apr 24 23:59:14.452774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 23:59:14.458075 systemd[1]: Started sshd@0-172.31.30.251:22-4.175.71.9:51612.service - OpenSSH per-connection server daemon (4.175.71.9:51612).
Apr 24 23:59:14.707232 kubelet[2208]: E0424 23:59:14.707118 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:59:14.710260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:59:14.710447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:59:14.710798 systemd[1]: kubelet.service: Consumed 1.045s CPU time.
Apr 24 23:59:15.507156 sshd[2218]: Accepted publickey for core from 4.175.71.9 port 51612 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:15.509984 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:15.519913 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 23:59:15.525064 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 23:59:15.528349 systemd-logind[1963]: New session 1 of user core.
Apr 24 23:59:15.543057 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 23:59:15.551080 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 23:59:15.555461 (systemd)[2223]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 23:59:15.675362 systemd[2223]: Queued start job for default target default.target.
Apr 24 23:59:15.685231 systemd[2223]: Created slice app.slice - User Application Slice.
Apr 24 23:59:15.685274 systemd[2223]: Reached target paths.target - Paths.
Apr 24 23:59:15.685294 systemd[2223]: Reached target timers.target - Timers.
Apr 24 23:59:15.686623 systemd[2223]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 23:59:15.698977 systemd[2223]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 23:59:15.699129 systemd[2223]: Reached target sockets.target - Sockets.
Apr 24 23:59:15.699148 systemd[2223]: Reached target basic.target - Basic System.
Apr 24 23:59:15.699198 systemd[2223]: Reached target default.target - Main User Target.
Apr 24 23:59:15.699239 systemd[2223]: Startup finished in 136ms.
Apr 24 23:59:15.699640 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 23:59:15.708926 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:59:16.427078 systemd[1]: Started sshd@1-172.31.30.251:22-4.175.71.9:37104.service - OpenSSH per-connection server daemon (4.175.71.9:37104).
Apr 24 23:59:17.601399 systemd-resolved[1907]: Clock change detected. Flushing caches.
Apr 24 23:59:18.220735 sshd[2234]: Accepted publickey for core from 4.175.71.9 port 37104 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:18.222278 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:18.228164 systemd-logind[1963]: New session 2 of user core.
Apr 24 23:59:18.234096 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 23:59:18.920534 sshd[2234]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:18.924752 systemd[1]: sshd@1-172.31.30.251:22-4.175.71.9:37104.service: Deactivated successfully.
Apr 24 23:59:18.926871 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 23:59:18.927576 systemd-logind[1963]: Session 2 logged out. Waiting for processes to exit.
Apr 24 23:59:18.928736 systemd-logind[1963]: Removed session 2.
Apr 24 23:59:19.083203 systemd[1]: Started sshd@2-172.31.30.251:22-4.175.71.9:37106.service - OpenSSH per-connection server daemon (4.175.71.9:37106).
Apr 24 23:59:20.028596 sshd[2241]: Accepted publickey for core from 4.175.71.9 port 37106 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:20.029302 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:20.033715 systemd-logind[1963]: New session 3 of user core.
Apr 24 23:59:20.042049 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 23:59:20.684302 sshd[2241]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:20.687752 systemd-logind[1963]: Session 3 logged out. Waiting for processes to exit.
Apr 24 23:59:20.688342 systemd[1]: sshd@2-172.31.30.251:22-4.175.71.9:37106.service: Deactivated successfully.
Apr 24 23:59:20.690461 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 23:59:20.692311 systemd-logind[1963]: Removed session 3.
Apr 24 23:59:20.852503 systemd[1]: Started sshd@3-172.31.30.251:22-4.175.71.9:37110.service - OpenSSH per-connection server daemon (4.175.71.9:37110).
Apr 24 23:59:21.828773 sshd[2248]: Accepted publickey for core from 4.175.71.9 port 37110 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:21.829431 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:21.834454 systemd-logind[1963]: New session 4 of user core.
Apr 24 23:59:21.840090 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 23:59:22.506243 sshd[2248]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:22.509760 systemd[1]: sshd@3-172.31.30.251:22-4.175.71.9:37110.service: Deactivated successfully.
Apr 24 23:59:22.511820 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 23:59:22.513330 systemd-logind[1963]: Session 4 logged out. Waiting for processes to exit.
Apr 24 23:59:22.514510 systemd-logind[1963]: Removed session 4.
Apr 24 23:59:22.675199 systemd[1]: Started sshd@4-172.31.30.251:22-4.175.71.9:37126.service - OpenSSH per-connection server daemon (4.175.71.9:37126).
Apr 24 23:59:23.620855 sshd[2255]: Accepted publickey for core from 4.175.71.9 port 37126 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:23.622411 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:23.626922 systemd-logind[1963]: New session 5 of user core.
Apr 24 23:59:23.634108 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 23:59:24.142569 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 23:59:24.143016 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:59:24.156895 sudo[2258]: pam_unix(sudo:session): session closed for user root
Apr 24 23:59:24.311695 sshd[2255]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:24.315451 systemd[1]: sshd@4-172.31.30.251:22-4.175.71.9:37126.service: Deactivated successfully.
Apr 24 23:59:24.317686 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 23:59:24.319780 systemd-logind[1963]: Session 5 logged out. Waiting for processes to exit.
Apr 24 23:59:24.321026 systemd-logind[1963]: Removed session 5.
Apr 24 23:59:24.483188 systemd[1]: Started sshd@5-172.31.30.251:22-4.175.71.9:37136.service - OpenSSH per-connection server daemon (4.175.71.9:37136).
Apr 24 23:59:25.457012 sshd[2263]: Accepted publickey for core from 4.175.71.9 port 37136 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:25.458550 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:25.464228 systemd-logind[1963]: New session 6 of user core.
Apr 24 23:59:25.470103 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 23:59:25.658057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:59:25.663115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:59:25.868285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:59:25.879290 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:59:25.924185 kubelet[2274]: E0424 23:59:25.924135 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:59:25.928431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:59:25.928650 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:59:25.977125 sudo[2282]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 23:59:25.977515 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:59:25.981396 sudo[2282]: pam_unix(sudo:session): session closed for user root
Apr 24 23:59:25.986920 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 24 23:59:25.987300 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:59:26.001187 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 24 23:59:26.004011 auditctl[2285]: No rules
Apr 24 23:59:26.004419 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 23:59:26.004638 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 24 23:59:26.007308 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:59:26.037979 augenrules[2303]: No rules
Apr 24 23:59:26.039408 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:59:26.040586 sudo[2281]: pam_unix(sudo:session): session closed for user root
Apr 24 23:59:26.200022 sshd[2263]: pam_unix(sshd:session): session closed for user core
Apr 24 23:59:26.203618 systemd[1]: sshd@5-172.31.30.251:22-4.175.71.9:37136.service: Deactivated successfully.
Apr 24 23:59:26.205595 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 23:59:26.207333 systemd-logind[1963]: Session 6 logged out. Waiting for processes to exit.
Apr 24 23:59:26.208616 systemd-logind[1963]: Removed session 6.
Apr 24 23:59:26.372584 systemd[1]: Started sshd@6-172.31.30.251:22-4.175.71.9:53680.service - OpenSSH per-connection server daemon (4.175.71.9:53680).
Apr 24 23:59:27.344490 sshd[2311]: Accepted publickey for core from 4.175.71.9 port 53680 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 24 23:59:27.345983 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:59:27.351686 systemd-logind[1963]: New session 7 of user core.
Apr 24 23:59:27.358086 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 23:59:27.864685 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 23:59:27.865113 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:59:28.352176 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 23:59:28.354147 (dockerd)[2330]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 23:59:28.873735 dockerd[2330]: time="2026-04-24T23:59:28.873660781Z" level=info msg="Starting up"
Apr 24 23:59:29.048116 dockerd[2330]: time="2026-04-24T23:59:29.048064717Z" level=info msg="Loading containers: start."
Apr 24 23:59:29.186852 kernel: Initializing XFRM netlink socket
Apr 24 23:59:29.259281 (udev-worker)[2353]: Network interface NamePolicy= disabled on kernel command line.
Apr 24 23:59:29.328800 systemd-networkd[1905]: docker0: Link UP
Apr 24 23:59:29.355746 dockerd[2330]: time="2026-04-24T23:59:29.355706338Z" level=info msg="Loading containers: done."
Apr 24 23:59:29.384936 dockerd[2330]: time="2026-04-24T23:59:29.384872222Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 23:59:29.385216 dockerd[2330]: time="2026-04-24T23:59:29.385010281Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 24 23:59:29.385216 dockerd[2330]: time="2026-04-24T23:59:29.385154543Z" level=info msg="Daemon has completed initialization" Apr 24 23:59:29.420502 dockerd[2330]: time="2026-04-24T23:59:29.420084225Z" level=info msg="API listen on /run/docker.sock" Apr 24 23:59:29.420365 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 23:59:30.337075 containerd[1984]: time="2026-04-24T23:59:30.337026435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 24 23:59:30.872130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373170057.mount: Deactivated successfully. 
Apr 24 23:59:32.503209 containerd[1984]: time="2026-04-24T23:59:32.503157479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:32.504695 containerd[1984]: time="2026-04-24T23:59:32.504643440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989" Apr 24 23:59:32.506170 containerd[1984]: time="2026-04-24T23:59:32.505757899Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:32.509149 containerd[1984]: time="2026-04-24T23:59:32.509110789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:32.512846 containerd[1984]: time="2026-04-24T23:59:32.512785624Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.175718351s" Apr 24 23:59:32.513056 containerd[1984]: time="2026-04-24T23:59:32.513024958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 24 23:59:32.515730 containerd[1984]: time="2026-04-24T23:59:32.515700654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 24 23:59:34.255639 containerd[1984]: time="2026-04-24T23:59:34.255583588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:34.256997 containerd[1984]: time="2026-04-24T23:59:34.256947486Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447" Apr 24 23:59:34.258243 containerd[1984]: time="2026-04-24T23:59:34.258188033Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:34.261646 containerd[1984]: time="2026-04-24T23:59:34.261589142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:34.262870 containerd[1984]: time="2026-04-24T23:59:34.262710906Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.746845986s" Apr 24 23:59:34.262870 containerd[1984]: time="2026-04-24T23:59:34.262756220Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 24 23:59:34.264085 containerd[1984]: time="2026-04-24T23:59:34.264055656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 24 23:59:35.591908 containerd[1984]: time="2026-04-24T23:59:35.591842037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:35.593239 containerd[1984]: 
time="2026-04-24T23:59:35.593177443Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756" Apr 24 23:59:35.594874 containerd[1984]: time="2026-04-24T23:59:35.594798765Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:35.598148 containerd[1984]: time="2026-04-24T23:59:35.598087260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:35.599819 containerd[1984]: time="2026-04-24T23:59:35.599281362Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.335188984s" Apr 24 23:59:35.599819 containerd[1984]: time="2026-04-24T23:59:35.599342292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 24 23:59:35.600151 containerd[1984]: time="2026-04-24T23:59:35.600127948Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 24 23:59:36.158262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:59:36.167118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:36.434528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:59:36.446618 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:59:36.507855 kubelet[2547]: E0424 23:59:36.506970 2547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:59:36.510520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:59:36.510727 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:59:36.774171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3540401771.mount: Deactivated successfully. Apr 24 23:59:37.393076 containerd[1984]: time="2026-04-24T23:59:37.393010806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:37.394251 containerd[1984]: time="2026-04-24T23:59:37.394161315Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711" Apr 24 23:59:37.395685 containerd[1984]: time="2026-04-24T23:59:37.395613873Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:37.398301 containerd[1984]: time="2026-04-24T23:59:37.398246218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:37.398971 containerd[1984]: time="2026-04-24T23:59:37.398934091Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id 
\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.798685147s" Apr 24 23:59:37.399061 containerd[1984]: time="2026-04-24T23:59:37.398977924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 24 23:59:37.400290 containerd[1984]: time="2026-04-24T23:59:37.400007601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 24 23:59:37.885176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173059538.mount: Deactivated successfully. Apr 24 23:59:39.036435 containerd[1984]: time="2026-04-24T23:59:39.036373254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:39.037893 containerd[1984]: time="2026-04-24T23:59:39.037839196Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Apr 24 23:59:39.039268 containerd[1984]: time="2026-04-24T23:59:39.039193971Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:39.042739 containerd[1984]: time="2026-04-24T23:59:39.042665195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:39.044272 containerd[1984]: time="2026-04-24T23:59:39.044027229Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.643983153s" Apr 24 23:59:39.044272 containerd[1984]: time="2026-04-24T23:59:39.044079062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 24 23:59:39.044962 containerd[1984]: time="2026-04-24T23:59:39.044937399Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 24 23:59:39.511594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642613991.mount: Deactivated successfully. Apr 24 23:59:39.523127 containerd[1984]: time="2026-04-24T23:59:39.523068209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:39.525090 containerd[1984]: time="2026-04-24T23:59:39.524915956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 24 23:59:39.527332 containerd[1984]: time="2026-04-24T23:59:39.527226917Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:39.530932 containerd[1984]: time="2026-04-24T23:59:39.530862239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:39.532567 containerd[1984]: time="2026-04-24T23:59:39.531938195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.853527ms" Apr 24 23:59:39.532567 containerd[1984]: time="2026-04-24T23:59:39.531979596Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 24 23:59:39.532755 containerd[1984]: time="2026-04-24T23:59:39.532635920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 24 23:59:40.092914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513956297.mount: Deactivated successfully. Apr 24 23:59:41.261440 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 24 23:59:41.417645 containerd[1984]: time="2026-04-24T23:59:41.417568351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:41.418906 containerd[1984]: time="2026-04-24T23:59:41.418855769Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426" Apr 24 23:59:41.421716 containerd[1984]: time="2026-04-24T23:59:41.420232841Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:41.424293 containerd[1984]: time="2026-04-24T23:59:41.424257350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:59:41.425623 containerd[1984]: time="2026-04-24T23:59:41.425576586Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.892910076s" Apr 24 23:59:41.425724 containerd[1984]: time="2026-04-24T23:59:41.425628349Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 24 23:59:45.436813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:45.450263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:45.488591 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-7.scope)... Apr 24 23:59:45.488609 systemd[1]: Reloading... Apr 24 23:59:45.616850 zram_generator::config[2748]: No configuration found. Apr 24 23:59:45.766059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:59:45.851457 systemd[1]: Reloading finished in 362 ms. Apr 24 23:59:45.902689 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 23:59:45.902811 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 23:59:45.903162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:45.905084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:46.111080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:46.122533 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:59:46.188125 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:59:46.188125 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:59:46.188125 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:59:46.188697 kubelet[2810]: I0424 23:59:46.188238 2810 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:59:46.462523 kubelet[2810]: I0424 23:59:46.462478 2810 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:59:46.462523 kubelet[2810]: I0424 23:59:46.462511 2810 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:59:46.462841 kubelet[2810]: I0424 23:59:46.462810 2810 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:59:46.519684 kubelet[2810]: I0424 23:59:46.519640 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:59:46.522322 kubelet[2810]: E0424 23:59:46.522253 2810 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:59:46.537850 kubelet[2810]: E0424 23:59:46.536032 2810 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" 
Apr 24 23:59:46.537850 kubelet[2810]: I0424 23:59:46.536078 2810 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:59:46.541178 kubelet[2810]: I0424 23:59:46.541154 2810 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 23:59:46.547996 kubelet[2810]: I0424 23:59:46.547938 2810 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:59:46.551852 kubelet[2810]: I0424 23:59:46.547994 2810 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-251","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUMa
nagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:59:46.552054 kubelet[2810]: I0424 23:59:46.551860 2810 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:59:46.552054 kubelet[2810]: I0424 23:59:46.551882 2810 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:59:46.552054 kubelet[2810]: I0424 23:59:46.552044 2810 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:46.559193 kubelet[2810]: I0424 23:59:46.559155 2810 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:59:46.559193 kubelet[2810]: I0424 23:59:46.559202 2810 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:59:46.559434 kubelet[2810]: I0424 23:59:46.559238 2810 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:59:46.567868 kubelet[2810]: I0424 23:59:46.567057 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:59:46.573875 kubelet[2810]: E0424 23:59:46.572571 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-251&limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:59:46.573875 kubelet[2810]: I0424 23:59:46.573040 2810 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:59:46.573875 kubelet[2810]: I0424 23:59:46.573746 2810 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:59:46.574814 
kubelet[2810]: W0424 23:59:46.574787 2810 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 23:59:46.575980 kubelet[2810]: E0424 23:59:46.575442 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:59:46.583139 kubelet[2810]: I0424 23:59:46.583098 2810 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:59:46.583271 kubelet[2810]: I0424 23:59:46.583172 2810 server.go:1289] "Started kubelet" Apr 24 23:59:46.583563 kubelet[2810]: I0424 23:59:46.583507 2810 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:59:46.585042 kubelet[2810]: I0424 23:59:46.584554 2810 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:59:46.587759 kubelet[2810]: I0424 23:59:46.586923 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:59:46.587759 kubelet[2810]: I0424 23:59:46.587433 2810 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:59:46.589522 kubelet[2810]: E0424 23:59:46.587589 2810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.251:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-251.18a970778372cb4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-251,UID:ip-172-31-30-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-251,},FirstTimestamp:2026-04-24 23:59:46.583128911 +0000 UTC m=+0.455365403,LastTimestamp:2026-04-24 23:59:46.583128911 +0000 UTC m=+0.455365403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-251,}" Apr 24 23:59:46.590620 kubelet[2810]: I0424 23:59:46.590317 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:59:46.590620 kubelet[2810]: I0424 23:59:46.590591 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:59:46.593917 kubelet[2810]: E0424 23:59:46.593303 2810 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-251\" not found" Apr 24 23:59:46.593917 kubelet[2810]: I0424 23:59:46.593348 2810 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:59:46.593917 kubelet[2810]: I0424 23:59:46.593556 2810 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:59:46.593917 kubelet[2810]: I0424 23:59:46.593604 2810 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:59:46.597526 kubelet[2810]: E0424 23:59:46.596735 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:59:46.597526 kubelet[2810]: I0424 23:59:46.597114 2810 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:59:46.597526 kubelet[2810]: I0424 23:59:46.597204 2810 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:59:46.601363 kubelet[2810]: E0424 23:59:46.601076 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-251?timeout=10s\": dial tcp 172.31.30.251:6443: connect: connection refused" interval="200ms" Apr 24 23:59:46.602061 kubelet[2810]: I0424 23:59:46.602031 2810 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:59:46.632880 kubelet[2810]: I0424 23:59:46.632712 2810 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:59:46.632880 kubelet[2810]: I0424 23:59:46.632732 2810 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:59:46.632880 kubelet[2810]: I0424 23:59:46.632750 2810 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:46.634748 kubelet[2810]: I0424 23:59:46.634722 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:59:46.637335 kubelet[2810]: I0424 23:59:46.637310 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:59:46.637761 kubelet[2810]: I0424 23:59:46.637440 2810 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:59:46.637761 kubelet[2810]: I0424 23:59:46.637469 2810 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:59:46.637761 kubelet[2810]: I0424 23:59:46.637481 2810 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:59:46.637761 kubelet[2810]: E0424 23:59:46.637531 2810 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:59:46.643075 kubelet[2810]: E0424 23:59:46.643048 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:59:46.643425 kubelet[2810]: I0424 23:59:46.643406 2810 policy_none.go:49] "None policy: Start" Apr 24 23:59:46.643543 kubelet[2810]: I0424 23:59:46.643532 2810 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:59:46.643624 kubelet[2810]: I0424 23:59:46.643614 2810 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:59:46.653751 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 23:59:46.668149 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 23:59:46.672932 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 23:59:46.687579 kubelet[2810]: E0424 23:59:46.686908 2810 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:59:46.687579 kubelet[2810]: I0424 23:59:46.687158 2810 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:59:46.687579 kubelet[2810]: I0424 23:59:46.687173 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:59:46.687818 kubelet[2810]: I0424 23:59:46.687614 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:59:46.689591 kubelet[2810]: E0424 23:59:46.689565 2810 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:59:46.689718 kubelet[2810]: E0424 23:59:46.689614 2810 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-251\" not found" Apr 24 23:59:46.752681 systemd[1]: Created slice kubepods-burstable-podeedf47871e974a151378938775ec838f.slice - libcontainer container kubepods-burstable-podeedf47871e974a151378938775ec838f.slice. Apr 24 23:59:46.759918 kubelet[2810]: E0424 23:59:46.759877 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:46.767666 systemd[1]: Created slice kubepods-burstable-pod4528b29d957be0e39658aa8d2cef92e0.slice - libcontainer container kubepods-burstable-pod4528b29d957be0e39658aa8d2cef92e0.slice. 
Apr 24 23:59:46.770035 kubelet[2810]: E0424 23:59:46.770009 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:46.772116 systemd[1]: Created slice kubepods-burstable-podf7b79ef7432903ef4447042b71d71af6.slice - libcontainer container kubepods-burstable-podf7b79ef7432903ef4447042b71d71af6.slice. Apr 24 23:59:46.774215 kubelet[2810]: E0424 23:59:46.774189 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:46.789618 kubelet[2810]: I0424 23:59:46.789590 2810 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-251" Apr 24 23:59:46.790018 kubelet[2810]: E0424 23:59:46.789986 2810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.251:6443/api/v1/nodes\": dial tcp 172.31.30.251:6443: connect: connection refused" node="ip-172-31-30-251" Apr 24 23:59:46.801985 kubelet[2810]: E0424 23:59:46.801917 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-251?timeout=10s\": dial tcp 172.31.30.251:6443: connect: connection refused" interval="400ms" Apr 24 23:59:46.896120 kubelet[2810]: I0424 23:59:46.895805 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:46.896120 kubelet[2810]: I0424 23:59:46.895886 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eedf47871e974a151378938775ec838f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-251\" (UID: \"eedf47871e974a151378938775ec838f\") " pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:46.896120 kubelet[2810]: I0424 23:59:46.895924 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eedf47871e974a151378938775ec838f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-251\" (UID: \"eedf47871e974a151378938775ec838f\") " pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:46.896120 kubelet[2810]: I0424 23:59:46.895948 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eedf47871e974a151378938775ec838f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-251\" (UID: \"eedf47871e974a151378938775ec838f\") " pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:46.896120 kubelet[2810]: I0424 23:59:46.895971 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:46.896366 kubelet[2810]: I0424 23:59:46.895993 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7b79ef7432903ef4447042b71d71af6-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-251\" (UID: \"f7b79ef7432903ef4447042b71d71af6\") " pod="kube-system/kube-scheduler-ip-172-31-30-251" Apr 24 23:59:46.896366 kubelet[2810]: I0424 23:59:46.896025 2810 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:46.896366 kubelet[2810]: I0424 23:59:46.896047 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:46.896366 kubelet[2810]: I0424 23:59:46.896061 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:46.953581 kubelet[2810]: E0424 23:59:46.953477 2810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.251:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-251.18a970778372cb4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-251,UID:ip-172-31-30-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-251,},FirstTimestamp:2026-04-24 23:59:46.583128911 +0000 UTC m=+0.455365403,LastTimestamp:2026-04-24 23:59:46.583128911 +0000 UTC m=+0.455365403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-251,}" Apr 24 23:59:46.992475 kubelet[2810]: I0424 23:59:46.992187 2810 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-251" Apr 24 23:59:46.992628 kubelet[2810]: E0424 23:59:46.992540 2810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.251:6443/api/v1/nodes\": dial tcp 172.31.30.251:6443: connect: connection refused" node="ip-172-31-30-251" Apr 24 23:59:47.061524 containerd[1984]: time="2026-04-24T23:59:47.061404987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-251,Uid:eedf47871e974a151378938775ec838f,Namespace:kube-system,Attempt:0,}" Apr 24 23:59:47.071697 containerd[1984]: time="2026-04-24T23:59:47.071607927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-251,Uid:4528b29d957be0e39658aa8d2cef92e0,Namespace:kube-system,Attempt:0,}" Apr 24 23:59:47.076120 containerd[1984]: time="2026-04-24T23:59:47.076079049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-251,Uid:f7b79ef7432903ef4447042b71d71af6,Namespace:kube-system,Attempt:0,}" Apr 24 23:59:47.203459 kubelet[2810]: E0424 23:59:47.203227 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-251?timeout=10s\": dial tcp 172.31.30.251:6443: connect: connection refused" interval="800ms" Apr 24 23:59:47.394585 kubelet[2810]: I0424 23:59:47.394268 2810 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-251" Apr 24 23:59:47.394934 kubelet[2810]: E0424 23:59:47.394781 2810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.251:6443/api/v1/nodes\": dial tcp 172.31.30.251:6443: connect: connection refused" 
node="ip-172-31-30-251" Apr 24 23:59:47.433788 kubelet[2810]: E0424 23:59:47.433741 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:59:47.466687 kubelet[2810]: E0424 23:59:47.466634 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-251&limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:59:47.558642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146358796.mount: Deactivated successfully. Apr 24 23:59:47.576133 containerd[1984]: time="2026-04-24T23:59:47.576075769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:47.578087 containerd[1984]: time="2026-04-24T23:59:47.578035161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 24 23:59:47.580039 containerd[1984]: time="2026-04-24T23:59:47.579994135Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:47.582112 containerd[1984]: time="2026-04-24T23:59:47.582073694Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:47.584209 containerd[1984]: 
time="2026-04-24T23:59:47.583970259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:59:47.586531 containerd[1984]: time="2026-04-24T23:59:47.586490602Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:47.588240 containerd[1984]: time="2026-04-24T23:59:47.587942607Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:59:47.591577 containerd[1984]: time="2026-04-24T23:59:47.591542614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:59:47.592558 containerd[1984]: time="2026-04-24T23:59:47.592521859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.813907ms" Apr 24 23:59:47.596299 containerd[1984]: time="2026-04-24T23:59:47.595215290Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.052154ms" Apr 24 23:59:47.596936 containerd[1984]: time="2026-04-24T23:59:47.596906485Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.417782ms" Apr 24 23:59:47.861648 kubelet[2810]: E0424 23:59:47.861579 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:59:47.888394 containerd[1984]: time="2026-04-24T23:59:47.885763094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:47.888394 containerd[1984]: time="2026-04-24T23:59:47.885886585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:47.888394 containerd[1984]: time="2026-04-24T23:59:47.885913779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:47.888394 containerd[1984]: time="2026-04-24T23:59:47.888058689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:47.901010 containerd[1984]: time="2026-04-24T23:59:47.900892056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:47.901193 containerd[1984]: time="2026-04-24T23:59:47.901035166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:47.901255 containerd[1984]: time="2026-04-24T23:59:47.901207987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:47.903007 containerd[1984]: time="2026-04-24T23:59:47.902944249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:47.922913 containerd[1984]: time="2026-04-24T23:59:47.920768047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:59:47.922913 containerd[1984]: time="2026-04-24T23:59:47.920848404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:59:47.922913 containerd[1984]: time="2026-04-24T23:59:47.920866420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:47.922913 containerd[1984]: time="2026-04-24T23:59:47.920960738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:59:47.942227 systemd[1]: Started cri-containerd-cfd7e8fb09b8db57af07abdabfc39b3739b770abdab276d5c8ed11f685952352.scope - libcontainer container cfd7e8fb09b8db57af07abdabfc39b3739b770abdab276d5c8ed11f685952352. Apr 24 23:59:47.944917 systemd[1]: Started cri-containerd-f01fb7f661683b908ca152e7fa9050fd5817245e5cbe5de4532419473a38c763.scope - libcontainer container f01fb7f661683b908ca152e7fa9050fd5817245e5cbe5de4532419473a38c763. Apr 24 23:59:47.964097 systemd[1]: Started cri-containerd-559d4bca40e9da65326f85b9ae2c771c5bb22ddd7bfd0f4ed1f16e365dbc2467.scope - libcontainer container 559d4bca40e9da65326f85b9ae2c771c5bb22ddd7bfd0f4ed1f16e365dbc2467. 
Apr 24 23:59:48.003947 kubelet[2810]: E0424 23:59:48.003872 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-251?timeout=10s\": dial tcp 172.31.30.251:6443: connect: connection refused" interval="1.6s" Apr 24 23:59:48.047656 containerd[1984]: time="2026-04-24T23:59:48.047576233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-251,Uid:4528b29d957be0e39658aa8d2cef92e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfd7e8fb09b8db57af07abdabfc39b3739b770abdab276d5c8ed11f685952352\"" Apr 24 23:59:48.052568 containerd[1984]: time="2026-04-24T23:59:48.052063765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-251,Uid:eedf47871e974a151378938775ec838f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f01fb7f661683b908ca152e7fa9050fd5817245e5cbe5de4532419473a38c763\"" Apr 24 23:59:48.067738 containerd[1984]: time="2026-04-24T23:59:48.067300975Z" level=info msg="CreateContainer within sandbox \"f01fb7f661683b908ca152e7fa9050fd5817245e5cbe5de4532419473a38c763\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:59:48.072461 containerd[1984]: time="2026-04-24T23:59:48.072245182Z" level=info msg="CreateContainer within sandbox \"cfd7e8fb09b8db57af07abdabfc39b3739b770abdab276d5c8ed11f685952352\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:59:48.081311 containerd[1984]: time="2026-04-24T23:59:48.081266478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-251,Uid:f7b79ef7432903ef4447042b71d71af6,Namespace:kube-system,Attempt:0,} returns sandbox id \"559d4bca40e9da65326f85b9ae2c771c5bb22ddd7bfd0f4ed1f16e365dbc2467\"" Apr 24 23:59:48.092868 containerd[1984]: time="2026-04-24T23:59:48.091959214Z" level=info msg="CreateContainer within sandbox 
\"559d4bca40e9da65326f85b9ae2c771c5bb22ddd7bfd0f4ed1f16e365dbc2467\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:59:48.101536 containerd[1984]: time="2026-04-24T23:59:48.101495419Z" level=info msg="CreateContainer within sandbox \"cfd7e8fb09b8db57af07abdabfc39b3739b770abdab276d5c8ed11f685952352\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ed9576879496d6eacdb0063497c9a8b4744cb0ac67a31eda9e4741d6233d6c3\"" Apr 24 23:59:48.102511 containerd[1984]: time="2026-04-24T23:59:48.102476185Z" level=info msg="StartContainer for \"3ed9576879496d6eacdb0063497c9a8b4744cb0ac67a31eda9e4741d6233d6c3\"" Apr 24 23:59:48.113161 containerd[1984]: time="2026-04-24T23:59:48.113037810Z" level=info msg="CreateContainer within sandbox \"f01fb7f661683b908ca152e7fa9050fd5817245e5cbe5de4532419473a38c763\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"075736676c8afc3a64e0806a626148b4a1c7f5a9e6f314c9a1f2d696c40c8123\"" Apr 24 23:59:48.114856 containerd[1984]: time="2026-04-24T23:59:48.114781614Z" level=info msg="StartContainer for \"075736676c8afc3a64e0806a626148b4a1c7f5a9e6f314c9a1f2d696c40c8123\"" Apr 24 23:59:48.136060 containerd[1984]: time="2026-04-24T23:59:48.135877913Z" level=info msg="CreateContainer within sandbox \"559d4bca40e9da65326f85b9ae2c771c5bb22ddd7bfd0f4ed1f16e365dbc2467\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"344c0d255fef5e828c45e2af4d5158c6cc8f9fd7c1b0689a2e480d21d9a2534b\"" Apr 24 23:59:48.136666 containerd[1984]: time="2026-04-24T23:59:48.136612109Z" level=info msg="StartContainer for \"344c0d255fef5e828c45e2af4d5158c6cc8f9fd7c1b0689a2e480d21d9a2534b\"" Apr 24 23:59:48.152068 systemd[1]: Started cri-containerd-3ed9576879496d6eacdb0063497c9a8b4744cb0ac67a31eda9e4741d6233d6c3.scope - libcontainer container 3ed9576879496d6eacdb0063497c9a8b4744cb0ac67a31eda9e4741d6233d6c3. 
Apr 24 23:59:48.156403 systemd[1]: Started cri-containerd-075736676c8afc3a64e0806a626148b4a1c7f5a9e6f314c9a1f2d696c40c8123.scope - libcontainer container 075736676c8afc3a64e0806a626148b4a1c7f5a9e6f314c9a1f2d696c40c8123. Apr 24 23:59:48.199512 kubelet[2810]: I0424 23:59:48.199476 2810 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-251" Apr 24 23:59:48.199901 kubelet[2810]: E0424 23:59:48.199864 2810 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.251:6443/api/v1/nodes\": dial tcp 172.31.30.251:6443: connect: connection refused" node="ip-172-31-30-251" Apr 24 23:59:48.217418 systemd[1]: Started cri-containerd-344c0d255fef5e828c45e2af4d5158c6cc8f9fd7c1b0689a2e480d21d9a2534b.scope - libcontainer container 344c0d255fef5e828c45e2af4d5158c6cc8f9fd7c1b0689a2e480d21d9a2534b. Apr 24 23:59:48.232744 kubelet[2810]: E0424 23:59:48.232314 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:59:48.259788 containerd[1984]: time="2026-04-24T23:59:48.259091014Z" level=info msg="StartContainer for \"3ed9576879496d6eacdb0063497c9a8b4744cb0ac67a31eda9e4741d6233d6c3\" returns successfully" Apr 24 23:59:48.265177 containerd[1984]: time="2026-04-24T23:59:48.265109051Z" level=info msg="StartContainer for \"075736676c8afc3a64e0806a626148b4a1c7f5a9e6f314c9a1f2d696c40c8123\" returns successfully" Apr 24 23:59:48.308123 containerd[1984]: time="2026-04-24T23:59:48.308073689Z" level=info msg="StartContainer for \"344c0d255fef5e828c45e2af4d5158c6cc8f9fd7c1b0689a2e480d21d9a2534b\" returns successfully" Apr 24 23:59:48.621996 kubelet[2810]: E0424 23:59:48.621952 2810 certificate_manager.go:596] "Failed while requesting a signed 
certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 23:59:48.653894 kubelet[2810]: E0424 23:59:48.653860 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:48.656858 kubelet[2810]: E0424 23:59:48.656814 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:48.657914 kubelet[2810]: E0424 23:59:48.657891 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:49.557108 kubelet[2810]: E0424 23:59:49.557060 2810 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.251:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:59:49.605091 kubelet[2810]: E0424 23:59:49.605040 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-251?timeout=10s\": dial tcp 172.31.30.251:6443: connect: connection refused" interval="3.2s" Apr 24 23:59:49.659561 kubelet[2810]: E0424 23:59:49.659168 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:49.659561 kubelet[2810]: E0424 
23:59:49.659330 2810 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-251\" not found" node="ip-172-31-30-251" Apr 24 23:59:49.802299 kubelet[2810]: I0424 23:59:49.802262 2810 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-251" Apr 24 23:59:51.433390 kubelet[2810]: I0424 23:59:51.433091 2810 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-251" Apr 24 23:59:51.501651 kubelet[2810]: I0424 23:59:51.501599 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:51.507194 kubelet[2810]: I0424 23:59:51.507159 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:51.512562 kubelet[2810]: E0424 23:59:51.512525 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:51.512778 kubelet[2810]: E0424 23:59:51.512525 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:51.512863 kubelet[2810]: I0424 23:59:51.512782 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-251" Apr 24 23:59:51.514651 kubelet[2810]: E0424 23:59:51.514617 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-251" Apr 24 23:59:51.514651 kubelet[2810]: I0424 23:59:51.514645 2810 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:51.516480 kubelet[2810]: E0424 23:59:51.516441 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:51.575651 kubelet[2810]: I0424 23:59:51.575601 2810 apiserver.go:52] "Watching apiserver" Apr 24 23:59:51.593871 kubelet[2810]: I0424 23:59:51.593796 2810 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:59:51.818265 kubelet[2810]: I0424 23:59:51.818146 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:51.820837 kubelet[2810]: E0424 23:59:51.820790 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-251\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:52.428622 kubelet[2810]: I0424 23:59:52.428333 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-251" Apr 24 23:59:53.104201 systemd[1]: Reloading requested from client PID 3093 ('systemctl') (unit session-7.scope)... Apr 24 23:59:53.104219 systemd[1]: Reloading... Apr 24 23:59:53.202868 zram_generator::config[3129]: No configuration found. Apr 24 23:59:53.343268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:59:53.444789 systemd[1]: Reloading finished in 340 ms. Apr 24 23:59:53.490729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:53.506482 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 24 23:59:53.506845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:53.513197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:59:53.827108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:59:53.833448 (kubelet)[3193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:59:53.893258 kubelet[3193]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:59:53.893258 kubelet[3193]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 23:59:53.893258 kubelet[3193]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 23:59:53.893765 kubelet[3193]: I0424 23:59:53.893329 3193 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:59:53.908836 kubelet[3193]: I0424 23:59:53.908697 3193 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:59:53.908836 kubelet[3193]: I0424 23:59:53.908727 3193 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:59:53.909326 kubelet[3193]: I0424 23:59:53.909301 3193 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:59:53.911187 kubelet[3193]: I0424 23:59:53.911159 3193 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:59:53.915164 kubelet[3193]: I0424 23:59:53.915127 3193 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:59:53.919105 kubelet[3193]: E0424 23:59:53.919082 3193 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:59:53.919861 kubelet[3193]: I0424 23:59:53.919311 3193 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:59:53.921686 kubelet[3193]: I0424 23:59:53.921666 3193 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:59:53.922958 kubelet[3193]: I0424 23:59:53.922917 3193 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:59:53.923125 kubelet[3193]: I0424 23:59:53.922953 3193 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-251","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:59:53.923375 kubelet[3193]: I0424 23:59:53.923128 3193 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:59:53.923375 kubelet[3193]: I0424 23:59:53.923144 3193 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:59:53.923375 kubelet[3193]: I0424 23:59:53.923264 3193 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:53.923506 kubelet[3193]: I0424 23:59:53.923461 3193 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:59:53.923506 kubelet[3193]: I0424 23:59:53.923477 3193 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:59:53.923645 kubelet[3193]: I0424 23:59:53.923620 3193 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:59:53.923694 kubelet[3193]: I0424 23:59:53.923653 3193 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:59:53.937893 kubelet[3193]: I0424 23:59:53.937437 3193 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:59:53.938581 kubelet[3193]: I0424 23:59:53.938554 3193 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:59:53.949391 kubelet[3193]: I0424 23:59:53.949352 3193 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:59:53.951861 kubelet[3193]: I0424 23:59:53.949520 3193 server.go:1289] "Started kubelet" Apr 24 23:59:53.951861 kubelet[3193]: I0424 23:59:53.951126 3193 apiserver.go:52] "Watching apiserver" Apr 24 23:59:53.951861 kubelet[3193]: I0424 23:59:53.951281 3193 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:59:53.953774 kubelet[3193]: I0424 23:59:53.953726 3193 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:59:53.956053 kubelet[3193]: I0424 23:59:53.956028 3193 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:59:53.963029 kubelet[3193]: I0424 23:59:53.962224 3193 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Apr 24 23:59:53.963029 kubelet[3193]: I0424 23:59:53.962483 3193 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:59:53.963029 kubelet[3193]: I0424 23:59:53.962762 3193 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:59:53.965910 kubelet[3193]: I0424 23:59:53.965888 3193 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:59:53.966037 kubelet[3193]: I0424 23:59:53.966002 3193 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:59:53.966141 kubelet[3193]: I0424 23:59:53.966126 3193 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:59:53.969050 kubelet[3193]: I0424 23:59:53.969027 3193 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:59:53.969158 kubelet[3193]: I0424 23:59:53.969136 3193 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:59:53.973819 kubelet[3193]: E0424 23:59:53.973543 3193 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:59:53.975448 kubelet[3193]: I0424 23:59:53.973984 3193 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:59:53.981796 kubelet[3193]: I0424 23:59:53.981676 3193 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:59:53.983185 kubelet[3193]: I0424 23:59:53.982904 3193 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 23:59:53.983185 kubelet[3193]: I0424 23:59:53.982922 3193 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:59:53.983185 kubelet[3193]: I0424 23:59:53.982946 3193 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:59:53.983185 kubelet[3193]: I0424 23:59:53.982952 3193 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:59:53.983185 kubelet[3193]: E0424 23:59:53.982989 3193 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:59:54.039672 kubelet[3193]: I0424 23:59:54.039646 3193 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.039796 3193 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.039822 3193 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.040010 3193 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.040022 3193 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.040044 3193 policy_none.go:49] "None policy: Start" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.040057 3193 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.040068 3193 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:59:54.039821 kubelet[3193]: I0424 23:59:54.040179 3193 state_mem.go:75] "Updated machine memory state" Apr 24 23:59:54.046069 kubelet[3193]: E0424 23:59:54.045534 3193 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:59:54.046069 kubelet[3193]: I0424 
23:59:54.045740 3193 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:59:54.046069 kubelet[3193]: I0424 23:59:54.045753 3193 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:59:54.048032 kubelet[3193]: I0424 23:59:54.047735 3193 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:59:54.048032 kubelet[3193]: E0424 23:59:54.047931 3193 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:59:54.086874 kubelet[3193]: I0424 23:59:54.084923 3193 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:54.086874 kubelet[3193]: I0424 23:59:54.085312 3193 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:54.118295 kubelet[3193]: I0424 23:59:54.117667 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-251" podStartSLOduration=2.117629497 podStartE2EDuration="2.117629497s" podCreationTimestamp="2026-04-24 23:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:54.117183451 +0000 UTC m=+0.275155692" watchObservedRunningTime="2026-04-24 23:59:54.117629497 +0000 UTC m=+0.275601716" Apr 24 23:59:54.141224 kubelet[3193]: I0424 23:59:54.140914 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-251" podStartSLOduration=0.14089575 podStartE2EDuration="140.89575ms" podCreationTimestamp="2026-04-24 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:54.140264909 +0000 UTC 
m=+0.298237135" watchObservedRunningTime="2026-04-24 23:59:54.14089575 +0000 UTC m=+0.298867974" Apr 24 23:59:54.141224 kubelet[3193]: I0424 23:59:54.141069 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-251" podStartSLOduration=0.141060473 podStartE2EDuration="141.060473ms" podCreationTimestamp="2026-04-24 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:59:54.130100434 +0000 UTC m=+0.288072662" watchObservedRunningTime="2026-04-24 23:59:54.141060473 +0000 UTC m=+0.299032700" Apr 24 23:59:54.148738 sudo[3230]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 24 23:59:54.149102 sudo[3230]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 24 23:59:54.161712 kubelet[3193]: I0424 23:59:54.161166 3193 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-251" Apr 24 23:59:54.167893 kubelet[3193]: I0424 23:59:54.166975 3193 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:59:54.168153 kubelet[3193]: I0424 23:59:54.168118 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:54.168265 kubelet[3193]: I0424 23:59:54.168220 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7b79ef7432903ef4447042b71d71af6-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-251\" (UID: \"f7b79ef7432903ef4447042b71d71af6\") 
" pod="kube-system/kube-scheduler-ip-172-31-30-251" Apr 24 23:59:54.168366 kubelet[3193]: I0424 23:59:54.168288 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eedf47871e974a151378938775ec838f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-251\" (UID: \"eedf47871e974a151378938775ec838f\") " pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:54.168366 kubelet[3193]: I0424 23:59:54.168351 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eedf47871e974a151378938775ec838f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-251\" (UID: \"eedf47871e974a151378938775ec838f\") " pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:54.168462 kubelet[3193]: I0424 23:59:54.168382 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eedf47871e974a151378938775ec838f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-251\" (UID: \"eedf47871e974a151378938775ec838f\") " pod="kube-system/kube-apiserver-ip-172-31-30-251" Apr 24 23:59:54.168512 kubelet[3193]: I0424 23:59:54.168436 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:54.168570 kubelet[3193]: I0424 23:59:54.168513 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-251\" 
(UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:54.168623 kubelet[3193]: I0424 23:59:54.168573 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:54.168669 kubelet[3193]: I0424 23:59:54.168605 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4528b29d957be0e39658aa8d2cef92e0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-251\" (UID: \"4528b29d957be0e39658aa8d2cef92e0\") " pod="kube-system/kube-controller-manager-ip-172-31-30-251" Apr 24 23:59:54.172920 kubelet[3193]: I0424 23:59:54.172701 3193 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-251" Apr 24 23:59:54.172920 kubelet[3193]: I0424 23:59:54.172770 3193 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-251" Apr 24 23:59:54.922673 sudo[3230]: pam_unix(sudo:session): session closed for user root Apr 24 23:59:55.839012 update_engine[1968]: I20260424 23:59:55.838924 1968 update_attempter.cc:509] Updating boot flags... Apr 24 23:59:55.930400 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3254) Apr 24 23:59:56.228855 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3256) Apr 24 23:59:57.185571 sudo[2314]: pam_unix(sudo:session): session closed for user root Apr 24 23:59:57.345066 sshd[2311]: pam_unix(sshd:session): session closed for user core Apr 24 23:59:57.350525 systemd[1]: sshd@6-172.31.30.251:22-4.175.71.9:53680.service: Deactivated successfully. 
Apr 24 23:59:57.353303 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 23:59:57.353517 systemd[1]: session-7.scope: Consumed 6.436s CPU time, 143.8M memory peak, 0B memory swap peak. Apr 24 23:59:57.354578 systemd-logind[1963]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:59:57.356198 systemd-logind[1963]: Removed session 7. Apr 24 23:59:59.680538 kubelet[3193]: I0424 23:59:59.680488 3193 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:59:59.681248 kubelet[3193]: I0424 23:59:59.681220 3193 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:59:59.681302 containerd[1984]: time="2026-04-24T23:59:59.681000143Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:59:59.760183 systemd[1]: Created slice kubepods-besteffort-pod8fd7a758_0a40_46ab_8e9d_596fee9a80a3.slice - libcontainer container kubepods-besteffort-pod8fd7a758_0a40_46ab_8e9d_596fee9a80a3.slice. Apr 24 23:59:59.779526 systemd[1]: Created slice kubepods-burstable-poddc37fa9b_717a_49c9_be15_2be707baec3a.slice - libcontainer container kubepods-burstable-poddc37fa9b_717a_49c9_be15_2be707baec3a.slice. 
Apr 24 23:59:59.904604 kubelet[3193]: I0424 23:59:59.904510 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-cgroup\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.904604 kubelet[3193]: I0424 23:59:59.904613 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cni-path\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.905036 kubelet[3193]: I0424 23:59:59.904648 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-lib-modules\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.905036 kubelet[3193]: I0424 23:59:59.904669 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-xtables-lock\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.905036 kubelet[3193]: I0424 23:59:59.904693 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc37fa9b-717a-49c9-be15-2be707baec3a-clustermesh-secrets\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.905036 kubelet[3193]: I0424 23:59:59.904713 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-config-path\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.905036 kubelet[3193]: I0424 23:59:59.904735 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-hubble-tls\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.905036 kubelet[3193]: I0424 23:59:59.904760 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-xtables-lock\") pod \"kube-proxy-pxzb9\" (UID: \"8fd7a758-0a40-46ab-8e9d-596fee9a80a3\") " pod="kube-system/kube-proxy-pxzb9" Apr 24 23:59:59.907357 kubelet[3193]: I0424 23:59:59.904784 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-net\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.907357 kubelet[3193]: I0424 23:59:59.904806 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-kernel\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.907357 kubelet[3193]: I0424 23:59:59.904846 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57v96\" (UniqueName: 
\"kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.907357 kubelet[3193]: I0424 23:59:59.904871 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-kube-proxy\") pod \"kube-proxy-pxzb9\" (UID: \"8fd7a758-0a40-46ab-8e9d-596fee9a80a3\") " pod="kube-system/kube-proxy-pxzb9" Apr 24 23:59:59.907357 kubelet[3193]: I0424 23:59:59.904892 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-lib-modules\") pod \"kube-proxy-pxzb9\" (UID: \"8fd7a758-0a40-46ab-8e9d-596fee9a80a3\") " pod="kube-system/kube-proxy-pxzb9" Apr 24 23:59:59.907602 kubelet[3193]: I0424 23:59:59.904920 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtqv\" (UniqueName: \"kubernetes.io/projected/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-kube-api-access-hxtqv\") pod \"kube-proxy-pxzb9\" (UID: \"8fd7a758-0a40-46ab-8e9d-596fee9a80a3\") " pod="kube-system/kube-proxy-pxzb9" Apr 24 23:59:59.907602 kubelet[3193]: I0424 23:59:59.904952 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-run\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.907602 kubelet[3193]: I0424 23:59:59.904974 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-hostproc\") pod \"cilium-ncc2q\" (UID: 
\"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.907602 kubelet[3193]: I0424 23:59:59.904996 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-etc-cni-netd\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 24 23:59:59.907602 kubelet[3193]: I0424 23:59:59.905019 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-bpf-maps\") pod \"cilium-ncc2q\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") " pod="kube-system/cilium-ncc2q" Apr 25 00:00:00.084237 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Apr 25 00:00:00.090679 kubelet[3193]: E0425 00:00:00.088664 3193 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.090679 kubelet[3193]: E0425 00:00:00.088744 3193 projected.go:194] Error preparing data for projected volume kube-api-access-hxtqv for pod kube-system/kube-proxy-pxzb9: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.090679 kubelet[3193]: E0425 00:00:00.089163 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-kube-api-access-hxtqv podName:8fd7a758-0a40-46ab-8e9d-596fee9a80a3 nodeName:}" failed. No retries permitted until 2026-04-25 00:00:00.589011139 +0000 UTC m=+6.746983364 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hxtqv" (UniqueName: "kubernetes.io/projected/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-kube-api-access-hxtqv") pod "kube-proxy-pxzb9" (UID: "8fd7a758-0a40-46ab-8e9d-596fee9a80a3") : configmap "kube-root-ca.crt" not found Apr 25 00:00:00.091463 kubelet[3193]: E0425 00:00:00.091052 3193 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.091463 kubelet[3193]: E0425 00:00:00.091081 3193 projected.go:194] Error preparing data for projected volume kube-api-access-57v96 for pod kube-system/cilium-ncc2q: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.091463 kubelet[3193]: E0425 00:00:00.091153 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96 podName:dc37fa9b-717a-49c9-be15-2be707baec3a nodeName:}" failed. No retries permitted until 2026-04-25 00:00:00.591120838 +0000 UTC m=+6.749093043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-57v96" (UniqueName: "kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96") pod "cilium-ncc2q" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a") : configmap "kube-root-ca.crt" not found Apr 25 00:00:00.116744 systemd[1]: logrotate.service: Deactivated successfully. 
Apr 25 00:00:00.620811 kubelet[3193]: E0425 00:00:00.620637 3193 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.620811 kubelet[3193]: E0425 00:00:00.620668 3193 projected.go:194] Error preparing data for projected volume kube-api-access-hxtqv for pod kube-system/kube-proxy-pxzb9: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.620811 kubelet[3193]: E0425 00:00:00.620729 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-kube-api-access-hxtqv podName:8fd7a758-0a40-46ab-8e9d-596fee9a80a3 nodeName:}" failed. No retries permitted until 2026-04-25 00:00:01.620707013 +0000 UTC m=+7.778679219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hxtqv" (UniqueName: "kubernetes.io/projected/8fd7a758-0a40-46ab-8e9d-596fee9a80a3-kube-api-access-hxtqv") pod "kube-proxy-pxzb9" (UID: "8fd7a758-0a40-46ab-8e9d-596fee9a80a3") : configmap "kube-root-ca.crt" not found Apr 25 00:00:00.621385 kubelet[3193]: E0425 00:00:00.621238 3193 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.621385 kubelet[3193]: E0425 00:00:00.621266 3193 projected.go:194] Error preparing data for projected volume kube-api-access-57v96 for pod kube-system/cilium-ncc2q: configmap "kube-root-ca.crt" not found Apr 25 00:00:00.621385 kubelet[3193]: E0425 00:00:00.621356 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96 podName:dc37fa9b-717a-49c9-be15-2be707baec3a nodeName:}" failed. No retries permitted until 2026-04-25 00:00:01.621338241 +0000 UTC m=+7.779310457 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57v96" (UniqueName: "kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96") pod "cilium-ncc2q" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a") : configmap "kube-root-ca.crt" not found Apr 25 00:00:00.986966 systemd[1]: Created slice kubepods-besteffort-pod9a90b521_7ed3_4db6_ba85_db810c0452db.slice - libcontainer container kubepods-besteffort-pod9a90b521_7ed3_4db6_ba85_db810c0452db.slice. Apr 25 00:00:01.028055 kubelet[3193]: I0425 00:00:01.027987 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wld7t\" (UniqueName: \"kubernetes.io/projected/9a90b521-7ed3-4db6-ba85-db810c0452db-kube-api-access-wld7t\") pod \"cilium-operator-6c4d7847fc-k6vcc\" (UID: \"9a90b521-7ed3-4db6-ba85-db810c0452db\") " pod="kube-system/cilium-operator-6c4d7847fc-k6vcc" Apr 25 00:00:01.028524 kubelet[3193]: I0425 00:00:01.028103 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a90b521-7ed3-4db6-ba85-db810c0452db-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k6vcc\" (UID: \"9a90b521-7ed3-4db6-ba85-db810c0452db\") " pod="kube-system/cilium-operator-6c4d7847fc-k6vcc" Apr 25 00:00:01.294669 containerd[1984]: time="2026-04-25T00:00:01.292717841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k6vcc,Uid:9a90b521-7ed3-4db6-ba85-db810c0452db,Namespace:kube-system,Attempt:0,}" Apr 25 00:00:02.114935 containerd[1984]: time="2026-04-25T00:00:02.112028867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:02.114935 containerd[1984]: time="2026-04-25T00:00:02.112219256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:02.114935 containerd[1984]: time="2026-04-25T00:00:02.112255897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:02.115298 containerd[1984]: time="2026-04-25T00:00:02.115017570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:02.185223 containerd[1984]: time="2026-04-25T00:00:02.177643999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pxzb9,Uid:8fd7a758-0a40-46ab-8e9d-596fee9a80a3,Namespace:kube-system,Attempt:0,}" Apr 25 00:00:02.197594 containerd[1984]: time="2026-04-25T00:00:02.197551581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ncc2q,Uid:dc37fa9b-717a-49c9-be15-2be707baec3a,Namespace:kube-system,Attempt:0,}" Apr 25 00:00:02.340254 systemd[1]: Started cri-containerd-65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3.scope - libcontainer container 65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3. Apr 25 00:00:02.506461 containerd[1984]: time="2026-04-25T00:00:02.506331095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:02.506461 containerd[1984]: time="2026-04-25T00:00:02.506409991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:02.506461 containerd[1984]: time="2026-04-25T00:00:02.506439742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:02.507315 containerd[1984]: time="2026-04-25T00:00:02.507257803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:02.508495 containerd[1984]: time="2026-04-25T00:00:02.508409873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:00:02.511861 containerd[1984]: time="2026-04-25T00:00:02.510889699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:00:02.512106 containerd[1984]: time="2026-04-25T00:00:02.512059091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:02.512846 containerd[1984]: time="2026-04-25T00:00:02.512263955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:00:02.655093 systemd[1]: Started cri-containerd-9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e.scope - libcontainer container 9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e. Apr 25 00:00:02.713899 systemd[1]: Started cri-containerd-b0448393495dd7bb1cb4a5788e930a57370e169b4d876d3f3afb3fc7c32c31b2.scope - libcontainer container b0448393495dd7bb1cb4a5788e930a57370e169b4d876d3f3afb3fc7c32c31b2. 
Apr 25 00:00:02.875238 containerd[1984]: time="2026-04-25T00:00:02.866728396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k6vcc,Uid:9a90b521-7ed3-4db6-ba85-db810c0452db,Namespace:kube-system,Attempt:0,} returns sandbox id \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\""
Apr 25 00:00:02.948870 containerd[1984]: time="2026-04-25T00:00:02.948066509Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 25 00:00:02.957694 containerd[1984]: time="2026-04-25T00:00:02.957550048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ncc2q,Uid:dc37fa9b-717a-49c9-be15-2be707baec3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\""
Apr 25 00:00:03.039637 containerd[1984]: time="2026-04-25T00:00:03.039582500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pxzb9,Uid:8fd7a758-0a40-46ab-8e9d-596fee9a80a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0448393495dd7bb1cb4a5788e930a57370e169b4d876d3f3afb3fc7c32c31b2\""
Apr 25 00:00:03.068557 containerd[1984]: time="2026-04-25T00:00:03.068512457Z" level=info msg="CreateContainer within sandbox \"b0448393495dd7bb1cb4a5788e930a57370e169b4d876d3f3afb3fc7c32c31b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 25 00:00:03.177072 containerd[1984]: time="2026-04-25T00:00:03.176745861Z" level=info msg="CreateContainer within sandbox \"b0448393495dd7bb1cb4a5788e930a57370e169b4d876d3f3afb3fc7c32c31b2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b08c34e76f42d52dea1f21622c10114204becf14fc6ef64757131fe70232158\""
Apr 25 00:00:03.181416 containerd[1984]: time="2026-04-25T00:00:03.181377044Z" level=info msg="StartContainer for \"2b08c34e76f42d52dea1f21622c10114204becf14fc6ef64757131fe70232158\""
Apr 25 00:00:03.378122 systemd[1]: Started cri-containerd-2b08c34e76f42d52dea1f21622c10114204becf14fc6ef64757131fe70232158.scope - libcontainer container 2b08c34e76f42d52dea1f21622c10114204becf14fc6ef64757131fe70232158.
Apr 25 00:00:03.532864 containerd[1984]: time="2026-04-25T00:00:03.532689835Z" level=info msg="StartContainer for \"2b08c34e76f42d52dea1f21622c10114204becf14fc6ef64757131fe70232158\" returns successfully"
Apr 25 00:00:04.779630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711116384.mount: Deactivated successfully.
Apr 25 00:00:06.355443 kubelet[3193]: I0425 00:00:06.355203 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pxzb9" podStartSLOduration=7.3546533929999995 podStartE2EDuration="7.354653393s" podCreationTimestamp="2026-04-24 23:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:00:04.222531508 +0000 UTC m=+10.380503729" watchObservedRunningTime="2026-04-25 00:00:06.354653393 +0000 UTC m=+12.512625619"
Apr 25 00:00:09.034937 containerd[1984]: time="2026-04-25T00:00:09.034886577Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:09.041805 containerd[1984]: time="2026-04-25T00:00:09.037855808Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 25 00:00:09.043435 containerd[1984]: time="2026-04-25T00:00:09.042274335Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:09.053048 containerd[1984]: time="2026-04-25T00:00:09.052658314Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.104538366s"
Apr 25 00:00:09.053048 containerd[1984]: time="2026-04-25T00:00:09.052741498Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 25 00:00:09.101543 containerd[1984]: time="2026-04-25T00:00:09.097624432Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 25 00:00:09.115053 containerd[1984]: time="2026-04-25T00:00:09.114855290Z" level=info msg="CreateContainer within sandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 25 00:00:09.180711 containerd[1984]: time="2026-04-25T00:00:09.180665364Z" level=info msg="CreateContainer within sandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\""
Apr 25 00:00:09.182891 containerd[1984]: time="2026-04-25T00:00:09.181817502Z" level=info msg="StartContainer for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\""
Apr 25 00:00:09.280752 systemd[1]: Started cri-containerd-af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4.scope - libcontainer container af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4.
Apr 25 00:00:09.343601 containerd[1984]: time="2026-04-25T00:00:09.340633993Z" level=info msg="StartContainer for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" returns successfully"
Apr 25 00:00:15.672030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788057864.mount: Deactivated successfully.
Apr 25 00:00:18.320215 containerd[1984]: time="2026-04-25T00:00:18.320155630Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:18.323397 containerd[1984]: time="2026-04-25T00:00:18.322723867Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 25 00:00:18.323397 containerd[1984]: time="2026-04-25T00:00:18.322792343Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 25 00:00:18.325508 containerd[1984]: time="2026-04-25T00:00:18.325459334Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.227790727s"
Apr 25 00:00:18.325508 containerd[1984]: time="2026-04-25T00:00:18.325502641Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 25 00:00:18.330230 containerd[1984]: time="2026-04-25T00:00:18.330099453Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 25 00:00:18.467610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296912472.mount: Deactivated successfully.
Apr 25 00:00:18.495159 containerd[1984]: time="2026-04-25T00:00:18.495103710Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\""
Apr 25 00:00:18.496008 containerd[1984]: time="2026-04-25T00:00:18.495788000Z" level=info msg="StartContainer for \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\""
Apr 25 00:00:18.696586 systemd[1]: run-containerd-runc-k8s.io-466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92-runc.BtBQt5.mount: Deactivated successfully.
Apr 25 00:00:18.722048 systemd[1]: Started cri-containerd-466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92.scope - libcontainer container 466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92.
Apr 25 00:00:18.754137 containerd[1984]: time="2026-04-25T00:00:18.754093622Z" level=info msg="StartContainer for \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\" returns successfully"
Apr 25 00:00:18.766009 systemd[1]: cri-containerd-466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92.scope: Deactivated successfully.
Apr 25 00:00:18.929076 containerd[1984]: time="2026-04-25T00:00:18.917243492Z" level=info msg="shim disconnected" id=466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92 namespace=k8s.io
Apr 25 00:00:18.929076 containerd[1984]: time="2026-04-25T00:00:18.929070485Z" level=warning msg="cleaning up after shim disconnected" id=466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92 namespace=k8s.io
Apr 25 00:00:18.929382 containerd[1984]: time="2026-04-25T00:00:18.929092391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:00:19.280631 containerd[1984]: time="2026-04-25T00:00:19.280586515Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 25 00:00:19.298236 containerd[1984]: time="2026-04-25T00:00:19.298182816Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\""
Apr 25 00:00:19.300893 containerd[1984]: time="2026-04-25T00:00:19.300855299Z" level=info msg="StartContainer for \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\""
Apr 25 00:00:19.320756 kubelet[3193]: I0425 00:00:19.318376 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k6vcc" podStartSLOduration=13.17322 podStartE2EDuration="19.31834986s" podCreationTimestamp="2026-04-25 00:00:00 +0000 UTC" firstStartedPulling="2026-04-25 00:00:02.944963709 +0000 UTC m=+9.102935919" lastFinishedPulling="2026-04-25 00:00:09.090093576 +0000 UTC m=+15.248065779" observedRunningTime="2026-04-25 00:00:10.316638875 +0000 UTC m=+16.474611097" watchObservedRunningTime="2026-04-25 00:00:19.31834986 +0000 UTC m=+25.476322085"
Apr 25 00:00:19.350061 systemd[1]: Started cri-containerd-cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025.scope - libcontainer container cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025.
Apr 25 00:00:19.385887 containerd[1984]: time="2026-04-25T00:00:19.385760540Z" level=info msg="StartContainer for \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\" returns successfully"
Apr 25 00:00:19.401686 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 25 00:00:19.402061 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 25 00:00:19.402149 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 25 00:00:19.408377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 25 00:00:19.408620 systemd[1]: cri-containerd-cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025.scope: Deactivated successfully.
Apr 25 00:00:19.450183 containerd[1984]: time="2026-04-25T00:00:19.450103584Z" level=info msg="shim disconnected" id=cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025 namespace=k8s.io
Apr 25 00:00:19.450183 containerd[1984]: time="2026-04-25T00:00:19.450166339Z" level=warning msg="cleaning up after shim disconnected" id=cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025 namespace=k8s.io
Apr 25 00:00:19.450183 containerd[1984]: time="2026-04-25T00:00:19.450180972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:00:19.463609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92-rootfs.mount: Deactivated successfully.
Apr 25 00:00:19.487821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 25 00:00:20.301338 containerd[1984]: time="2026-04-25T00:00:20.301241830Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 25 00:00:20.345244 containerd[1984]: time="2026-04-25T00:00:20.345099248Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\""
Apr 25 00:00:20.348265 containerd[1984]: time="2026-04-25T00:00:20.346381346Z" level=info msg="StartContainer for \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\""
Apr 25 00:00:20.425071 systemd[1]: Started cri-containerd-b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f.scope - libcontainer container b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f.
Apr 25 00:00:20.465159 systemd[1]: run-containerd-runc-k8s.io-b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f-runc.6PXigU.mount: Deactivated successfully.
Apr 25 00:00:20.487558 containerd[1984]: time="2026-04-25T00:00:20.487447682Z" level=info msg="StartContainer for \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\" returns successfully"
Apr 25 00:00:20.497328 systemd[1]: cri-containerd-b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f.scope: Deactivated successfully.
Apr 25 00:00:20.536643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f-rootfs.mount: Deactivated successfully.
Apr 25 00:00:20.545113 containerd[1984]: time="2026-04-25T00:00:20.545028659Z" level=info msg="shim disconnected" id=b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f namespace=k8s.io
Apr 25 00:00:20.545113 containerd[1984]: time="2026-04-25T00:00:20.545107009Z" level=warning msg="cleaning up after shim disconnected" id=b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f namespace=k8s.io
Apr 25 00:00:20.545113 containerd[1984]: time="2026-04-25T00:00:20.545119667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:00:21.308042 containerd[1984]: time="2026-04-25T00:00:21.307996759Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 25 00:00:21.339802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057200149.mount: Deactivated successfully.
Apr 25 00:00:21.343591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307741129.mount: Deactivated successfully.
Apr 25 00:00:21.345922 containerd[1984]: time="2026-04-25T00:00:21.345880964Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\""
Apr 25 00:00:21.346806 containerd[1984]: time="2026-04-25T00:00:21.346751186Z" level=info msg="StartContainer for \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\""
Apr 25 00:00:21.380053 systemd[1]: Started cri-containerd-38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1.scope - libcontainer container 38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1.
Apr 25 00:00:21.409532 systemd[1]: cri-containerd-38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1.scope: Deactivated successfully.
Apr 25 00:00:21.411642 containerd[1984]: time="2026-04-25T00:00:21.411592870Z" level=info msg="StartContainer for \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\" returns successfully"
Apr 25 00:00:21.440695 containerd[1984]: time="2026-04-25T00:00:21.440633129Z" level=info msg="shim disconnected" id=38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1 namespace=k8s.io
Apr 25 00:00:21.440695 containerd[1984]: time="2026-04-25T00:00:21.440687656Z" level=warning msg="cleaning up after shim disconnected" id=38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1 namespace=k8s.io
Apr 25 00:00:21.440695 containerd[1984]: time="2026-04-25T00:00:21.440700039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:00:22.315099 containerd[1984]: time="2026-04-25T00:00:22.315043053Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 25 00:00:22.336422 containerd[1984]: time="2026-04-25T00:00:22.335746977Z" level=info msg="CreateContainer within sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\""
Apr 25 00:00:22.339589 containerd[1984]: time="2026-04-25T00:00:22.339446251Z" level=info msg="StartContainer for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\""
Apr 25 00:00:22.379169 systemd[1]: Started cri-containerd-c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db.scope - libcontainer container c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db.
Apr 25 00:00:22.412710 containerd[1984]: time="2026-04-25T00:00:22.412659007Z" level=info msg="StartContainer for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" returns successfully"
Apr 25 00:00:22.535001 systemd[1]: run-containerd-runc-k8s.io-c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db-runc.z2lKa5.mount: Deactivated successfully.
Apr 25 00:00:22.702811 kubelet[3193]: I0425 00:00:22.702591 3193 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 25 00:00:22.796706 systemd[1]: Created slice kubepods-burstable-pod97f8b8ec_97bb_4f8a_8c6b_484ee489375d.slice - libcontainer container kubepods-burstable-pod97f8b8ec_97bb_4f8a_8c6b_484ee489375d.slice.
Apr 25 00:00:22.805727 systemd[1]: Created slice kubepods-burstable-pod87228786_9cd3_45d2_9dfe_e9ed7b0f130d.slice - libcontainer container kubepods-burstable-pod87228786_9cd3_45d2_9dfe_e9ed7b0f130d.slice.
Apr 25 00:00:22.837881 kubelet[3193]: I0425 00:00:22.835810 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28fx5\" (UniqueName: \"kubernetes.io/projected/97f8b8ec-97bb-4f8a-8c6b-484ee489375d-kube-api-access-28fx5\") pod \"coredns-674b8bbfcf-hkg9n\" (UID: \"97f8b8ec-97bb-4f8a-8c6b-484ee489375d\") " pod="kube-system/coredns-674b8bbfcf-hkg9n"
Apr 25 00:00:22.837881 kubelet[3193]: I0425 00:00:22.837627 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97f8b8ec-97bb-4f8a-8c6b-484ee489375d-config-volume\") pod \"coredns-674b8bbfcf-hkg9n\" (UID: \"97f8b8ec-97bb-4f8a-8c6b-484ee489375d\") " pod="kube-system/coredns-674b8bbfcf-hkg9n"
Apr 25 00:00:22.837881 kubelet[3193]: I0425 00:00:22.837872 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87228786-9cd3-45d2-9dfe-e9ed7b0f130d-config-volume\") pod \"coredns-674b8bbfcf-kq6t8\" (UID: \"87228786-9cd3-45d2-9dfe-e9ed7b0f130d\") " pod="kube-system/coredns-674b8bbfcf-kq6t8"
Apr 25 00:00:22.838236 kubelet[3193]: I0425 00:00:22.838199 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgrt\" (UniqueName: \"kubernetes.io/projected/87228786-9cd3-45d2-9dfe-e9ed7b0f130d-kube-api-access-scgrt\") pod \"coredns-674b8bbfcf-kq6t8\" (UID: \"87228786-9cd3-45d2-9dfe-e9ed7b0f130d\") " pod="kube-system/coredns-674b8bbfcf-kq6t8"
Apr 25 00:00:23.106704 containerd[1984]: time="2026-04-25T00:00:23.106587695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hkg9n,Uid:97f8b8ec-97bb-4f8a-8c6b-484ee489375d,Namespace:kube-system,Attempt:0,}"
Apr 25 00:00:23.109841 containerd[1984]: time="2026-04-25T00:00:23.109506469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kq6t8,Uid:87228786-9cd3-45d2-9dfe-e9ed7b0f130d,Namespace:kube-system,Attempt:0,}"
Apr 25 00:00:23.333581 kubelet[3193]: I0425 00:00:23.332587 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ncc2q" podStartSLOduration=8.986403365 podStartE2EDuration="24.332568918s" podCreationTimestamp="2026-04-24 23:59:59 +0000 UTC" firstStartedPulling="2026-04-25 00:00:02.980023926 +0000 UTC m=+9.137996141" lastFinishedPulling="2026-04-25 00:00:18.326189487 +0000 UTC m=+24.484161694" observedRunningTime="2026-04-25 00:00:23.332095758 +0000 UTC m=+29.490067984" watchObservedRunningTime="2026-04-25 00:00:23.332568918 +0000 UTC m=+29.490541143"
Apr 25 00:00:24.849252 systemd-networkd[1905]: cilium_host: Link UP
Apr 25 00:00:24.849802 systemd-networkd[1905]: cilium_net: Link UP
Apr 25 00:00:24.850389 systemd-networkd[1905]: cilium_net: Gained carrier
Apr 25 00:00:24.851069 systemd-networkd[1905]: cilium_host: Gained carrier
Apr 25 00:00:24.851815 (udev-worker)[4152]: Network interface NamePolicy= disabled on kernel command line.
Apr 25 00:00:24.853574 (udev-worker)[4208]: Network interface NamePolicy= disabled on kernel command line.
Apr 25 00:00:25.000056 systemd-networkd[1905]: cilium_vxlan: Link UP
Apr 25 00:00:25.000068 systemd-networkd[1905]: cilium_vxlan: Gained carrier
Apr 25 00:00:25.343992 systemd-networkd[1905]: cilium_net: Gained IPv6LL
Apr 25 00:00:25.591989 kernel: NET: Registered PF_ALG protocol family
Apr 25 00:00:25.808223 systemd-networkd[1905]: cilium_host: Gained IPv6LL
Apr 25 00:00:26.313715 systemd-networkd[1905]: lxc_health: Link UP
Apr 25 00:00:26.315302 (udev-worker)[4234]: Network interface NamePolicy= disabled on kernel command line.
Apr 25 00:00:26.321957 systemd-networkd[1905]: lxc_health: Gained carrier
Apr 25 00:00:26.705941 systemd-networkd[1905]: cilium_vxlan: Gained IPv6LL
Apr 25 00:00:26.743448 systemd-networkd[1905]: lxcd1f528e2e917: Link UP
Apr 25 00:00:26.756879 kernel: eth0: renamed from tmp07412
Apr 25 00:00:26.758725 systemd-networkd[1905]: lxcae09c4efd641: Link UP
Apr 25 00:00:26.770231 systemd-networkd[1905]: lxcd1f528e2e917: Gained carrier
Apr 25 00:00:26.772975 kernel: eth0: renamed from tmpe3f4c
Apr 25 00:00:26.778230 systemd-networkd[1905]: lxcae09c4efd641: Gained carrier
Apr 25 00:00:26.779677 (udev-worker)[4555]: Network interface NamePolicy= disabled on kernel command line.
Apr 25 00:00:28.177989 systemd-networkd[1905]: lxcae09c4efd641: Gained IPv6LL
Apr 25 00:00:28.304002 systemd-networkd[1905]: lxc_health: Gained IPv6LL
Apr 25 00:00:28.304362 systemd-networkd[1905]: lxcd1f528e2e917: Gained IPv6LL
Apr 25 00:00:30.600965 ntpd[1953]: Listen normally on 8 cilium_host 192.168.0.102:123
Apr 25 00:00:30.601061 ntpd[1953]: Listen normally on 9 cilium_net [fe80::a4ab:4fff:fe44:be4f%4]:123
Apr 25 00:00:30.601121 ntpd[1953]: Listen normally on 10 cilium_host [fe80::501f:72ff:fe31:8f2d%5]:123
Apr 25 00:00:30.601164 ntpd[1953]: Listen normally on 11 cilium_vxlan [fe80::a04b:31ff:fe6a:4207%6]:123
Apr 25 00:00:30.601208 ntpd[1953]: Listen normally on 12 lxc_health [fe80::5414:edff:feaf:aaa3%8]:123
Apr 25 00:00:30.601256 ntpd[1953]: Listen normally on 13 lxcd1f528e2e917 [fe80::c098:34ff:fe0d:62f7%10]:123
Apr 25 00:00:30.601295 ntpd[1953]: Listen normally on 14 lxcae09c4efd641 [fe80::1c8b:56ff:feae:e9c3%12]:123
Apr 25 00:00:31.288177 containerd[1984]: time="2026-04-25T00:00:31.287732199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 25 00:00:31.288177 containerd[1984]: time="2026-04-25T00:00:31.287856133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 25 00:00:31.288177 containerd[1984]: time="2026-04-25T00:00:31.287899902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:00:31.288177 containerd[1984]: time="2026-04-25T00:00:31.288040597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:00:31.308180 containerd[1984]: time="2026-04-25T00:00:31.304505097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 25 00:00:31.308180 containerd[1984]: time="2026-04-25T00:00:31.304575201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 25 00:00:31.308180 containerd[1984]: time="2026-04-25T00:00:31.304618512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:00:31.308180 containerd[1984]: time="2026-04-25T00:00:31.304745803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 25 00:00:31.357896 systemd[1]: Started cri-containerd-e3f4ce600c096440a590627c865c9a216bdbe40a81baf8e43b48bc92278066e7.scope - libcontainer container e3f4ce600c096440a590627c865c9a216bdbe40a81baf8e43b48bc92278066e7.
Apr 25 00:00:31.383053 systemd[1]: Started cri-containerd-074128f61da29ef87db03e9beab855c6102869f585707360f878e1fbc0f85041.scope - libcontainer container 074128f61da29ef87db03e9beab855c6102869f585707360f878e1fbc0f85041.
Apr 25 00:00:31.532441 containerd[1984]: time="2026-04-25T00:00:31.532380441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hkg9n,Uid:97f8b8ec-97bb-4f8a-8c6b-484ee489375d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f4ce600c096440a590627c865c9a216bdbe40a81baf8e43b48bc92278066e7\""
Apr 25 00:00:31.553060 containerd[1984]: time="2026-04-25T00:00:31.552693005Z" level=info msg="CreateContainer within sandbox \"e3f4ce600c096440a590627c865c9a216bdbe40a81baf8e43b48bc92278066e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 25 00:00:31.569716 containerd[1984]: time="2026-04-25T00:00:31.569655193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kq6t8,Uid:87228786-9cd3-45d2-9dfe-e9ed7b0f130d,Namespace:kube-system,Attempt:0,} returns sandbox id \"074128f61da29ef87db03e9beab855c6102869f585707360f878e1fbc0f85041\""
Apr 25 00:00:31.588883 containerd[1984]: time="2026-04-25T00:00:31.584608837Z" level=info msg="CreateContainer within sandbox \"074128f61da29ef87db03e9beab855c6102869f585707360f878e1fbc0f85041\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 25 00:00:31.665549 containerd[1984]: time="2026-04-25T00:00:31.665494304Z" level=info msg="CreateContainer within sandbox \"e3f4ce600c096440a590627c865c9a216bdbe40a81baf8e43b48bc92278066e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a78ba12d26215878c98f0874c9e100041bcfc96e9d9fa617ba976551d686d403\""
Apr 25 00:00:31.666422 containerd[1984]: time="2026-04-25T00:00:31.666063589Z" level=info msg="StartContainer for \"a78ba12d26215878c98f0874c9e100041bcfc96e9d9fa617ba976551d686d403\""
Apr 25 00:00:31.670131 containerd[1984]: time="2026-04-25T00:00:31.670059665Z" level=info msg="CreateContainer within sandbox \"074128f61da29ef87db03e9beab855c6102869f585707360f878e1fbc0f85041\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33065c733b09785738b4d33ed38e30087f359373d1b0d778affefee90f0380c9\""
Apr 25 00:00:31.672635 containerd[1984]: time="2026-04-25T00:00:31.671007175Z" level=info msg="StartContainer for \"33065c733b09785738b4d33ed38e30087f359373d1b0d778affefee90f0380c9\""
Apr 25 00:00:31.727138 systemd[1]: Started cri-containerd-33065c733b09785738b4d33ed38e30087f359373d1b0d778affefee90f0380c9.scope - libcontainer container 33065c733b09785738b4d33ed38e30087f359373d1b0d778affefee90f0380c9.
Apr 25 00:00:31.728889 systemd[1]: Started cri-containerd-a78ba12d26215878c98f0874c9e100041bcfc96e9d9fa617ba976551d686d403.scope - libcontainer container a78ba12d26215878c98f0874c9e100041bcfc96e9d9fa617ba976551d686d403.
Apr 25 00:00:31.794422 containerd[1984]: time="2026-04-25T00:00:31.794360695Z" level=info msg="StartContainer for \"33065c733b09785738b4d33ed38e30087f359373d1b0d778affefee90f0380c9\" returns successfully"
Apr 25 00:00:31.794639 containerd[1984]: time="2026-04-25T00:00:31.794358999Z" level=info msg="StartContainer for \"a78ba12d26215878c98f0874c9e100041bcfc96e9d9fa617ba976551d686d403\" returns successfully"
Apr 25 00:00:32.299715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074300821.mount: Deactivated successfully.
Apr 25 00:00:32.366110 kubelet[3193]: I0425 00:00:32.366038 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kq6t8" podStartSLOduration=32.366019686 podStartE2EDuration="32.366019686s" podCreationTimestamp="2026-04-25 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:00:32.352161923 +0000 UTC m=+38.510134149" watchObservedRunningTime="2026-04-25 00:00:32.366019686 +0000 UTC m=+38.523991911"
Apr 25 00:00:34.446227 systemd[1]: Started sshd@7-172.31.30.251:22-4.175.71.9:54010.service - OpenSSH per-connection server daemon (4.175.71.9:54010).
Apr 25 00:00:35.728246 sshd[4743]: Accepted publickey for core from 4.175.71.9 port 54010 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:35.743414 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:35.752586 systemd-logind[1963]: New session 8 of user core. Apr 25 00:00:35.759158 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 25 00:00:37.456975 sshd[4743]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:37.461877 systemd-logind[1963]: Session 8 logged out. Waiting for processes to exit. Apr 25 00:00:37.462413 systemd[1]: sshd@7-172.31.30.251:22-4.175.71.9:54010.service: Deactivated successfully. Apr 25 00:00:37.465545 systemd[1]: session-8.scope: Deactivated successfully. Apr 25 00:00:37.466862 systemd-logind[1963]: Removed session 8. Apr 25 00:00:42.635317 systemd[1]: Started sshd@8-172.31.30.251:22-4.175.71.9:55210.service - OpenSSH per-connection server daemon (4.175.71.9:55210). Apr 25 00:00:43.364591 kubelet[3193]: I0425 00:00:43.364500 3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hkg9n" podStartSLOduration=43.364477518 podStartE2EDuration="43.364477518s" podCreationTimestamp="2026-04-25 00:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:00:32.367142111 +0000 UTC m=+38.525114336" watchObservedRunningTime="2026-04-25 00:00:43.364477518 +0000 UTC m=+49.522449745" Apr 25 00:00:43.653002 sshd[4765]: Accepted publickey for core from 4.175.71.9 port 55210 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:43.654765 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:43.659219 systemd-logind[1963]: New session 9 of user core. Apr 25 00:00:43.666045 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 25 00:00:44.431028 sshd[4765]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:44.435147 systemd[1]: sshd@8-172.31.30.251:22-4.175.71.9:55210.service: Deactivated successfully. Apr 25 00:00:44.438265 systemd[1]: session-9.scope: Deactivated successfully. Apr 25 00:00:44.439261 systemd-logind[1963]: Session 9 logged out. Waiting for processes to exit. Apr 25 00:00:44.440361 systemd-logind[1963]: Removed session 9. Apr 25 00:00:49.599226 systemd[1]: Started sshd@9-172.31.30.251:22-4.175.71.9:47340.service - OpenSSH per-connection server daemon (4.175.71.9:47340). Apr 25 00:00:50.592482 sshd[4785]: Accepted publickey for core from 4.175.71.9 port 47340 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:50.600781 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:50.606059 systemd-logind[1963]: New session 10 of user core. Apr 25 00:00:50.611048 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 25 00:00:51.363160 sshd[4785]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:51.368431 systemd[1]: sshd@9-172.31.30.251:22-4.175.71.9:47340.service: Deactivated successfully. Apr 25 00:00:51.371009 systemd[1]: session-10.scope: Deactivated successfully. Apr 25 00:00:51.372901 systemd-logind[1963]: Session 10 logged out. Waiting for processes to exit. Apr 25 00:00:51.373977 systemd-logind[1963]: Removed session 10. Apr 25 00:00:51.536609 systemd[1]: Started sshd@10-172.31.30.251:22-4.175.71.9:47348.service - OpenSSH per-connection server daemon (4.175.71.9:47348). Apr 25 00:00:52.526970 sshd[4799]: Accepted publickey for core from 4.175.71.9 port 47348 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:52.528485 sshd[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:52.534527 systemd-logind[1963]: New session 11 of user core. 
Apr 25 00:00:52.541073 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 25 00:00:53.331447 sshd[4799]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:53.336315 systemd-logind[1963]: Session 11 logged out. Waiting for processes to exit. Apr 25 00:00:53.336772 systemd[1]: sshd@10-172.31.30.251:22-4.175.71.9:47348.service: Deactivated successfully. Apr 25 00:00:53.339551 systemd[1]: session-11.scope: Deactivated successfully. Apr 25 00:00:53.342362 systemd-logind[1963]: Removed session 11. Apr 25 00:00:53.513240 systemd[1]: Started sshd@11-172.31.30.251:22-4.175.71.9:47356.service - OpenSSH per-connection server daemon (4.175.71.9:47356). Apr 25 00:00:54.524255 sshd[4809]: Accepted publickey for core from 4.175.71.9 port 47356 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:00:54.525887 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:00:54.531324 systemd-logind[1963]: New session 12 of user core. Apr 25 00:00:54.536088 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 25 00:00:55.292405 sshd[4809]: pam_unix(sshd:session): session closed for user core Apr 25 00:00:55.296147 systemd[1]: sshd@11-172.31.30.251:22-4.175.71.9:47356.service: Deactivated successfully. Apr 25 00:00:55.298682 systemd[1]: session-12.scope: Deactivated successfully. Apr 25 00:00:55.300297 systemd-logind[1963]: Session 12 logged out. Waiting for processes to exit. Apr 25 00:00:55.301874 systemd-logind[1963]: Removed session 12. Apr 25 00:01:00.457510 systemd[1]: Started sshd@12-172.31.30.251:22-4.175.71.9:42336.service - OpenSSH per-connection server daemon (4.175.71.9:42336). 
Apr 25 00:01:01.441752 sshd[4824]: Accepted publickey for core from 4.175.71.9 port 42336 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:01.446192 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:01.460612 systemd-logind[1963]: New session 13 of user core.
Apr 25 00:01:01.477768 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 25 00:01:02.848445 sshd[4824]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:02.880916 systemd[1]: sshd@12-172.31.30.251:22-4.175.71.9:42336.service: Deactivated successfully.
Apr 25 00:01:02.906956 systemd[1]: session-13.scope: Deactivated successfully.
Apr 25 00:01:02.910490 systemd-logind[1963]: Session 13 logged out. Waiting for processes to exit.
Apr 25 00:01:02.936165 systemd-logind[1963]: Removed session 13.
Apr 25 00:01:08.035194 systemd[1]: Started sshd@13-172.31.30.251:22-4.175.71.9:37314.service - OpenSSH per-connection server daemon (4.175.71.9:37314).
Apr 25 00:01:09.116152 sshd[4839]: Accepted publickey for core from 4.175.71.9 port 37314 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:09.117755 sshd[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:09.124382 systemd-logind[1963]: New session 14 of user core.
Apr 25 00:01:09.130083 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 25 00:01:09.920092 sshd[4839]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:09.927157 systemd[1]: sshd@13-172.31.30.251:22-4.175.71.9:37314.service: Deactivated successfully.
Apr 25 00:01:09.929914 systemd[1]: session-14.scope: Deactivated successfully.
Apr 25 00:01:09.931796 systemd-logind[1963]: Session 14 logged out. Waiting for processes to exit.
Apr 25 00:01:09.933539 systemd-logind[1963]: Removed session 14.
Apr 25 00:01:10.084189 systemd[1]: Started sshd@14-172.31.30.251:22-4.175.71.9:37330.service - OpenSSH per-connection server daemon (4.175.71.9:37330).
Apr 25 00:01:11.068212 sshd[4852]: Accepted publickey for core from 4.175.71.9 port 37330 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:11.068936 sshd[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:11.073533 systemd-logind[1963]: New session 15 of user core.
Apr 25 00:01:11.080068 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 25 00:01:12.901348 sshd[4852]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:12.911050 systemd[1]: sshd@14-172.31.30.251:22-4.175.71.9:37330.service: Deactivated successfully.
Apr 25 00:01:12.913571 systemd[1]: session-15.scope: Deactivated successfully.
Apr 25 00:01:12.914699 systemd-logind[1963]: Session 15 logged out. Waiting for processes to exit.
Apr 25 00:01:12.916045 systemd-logind[1963]: Removed session 15.
Apr 25 00:01:13.073387 systemd[1]: Started sshd@15-172.31.30.251:22-4.175.71.9:37332.service - OpenSSH per-connection server daemon (4.175.71.9:37332).
Apr 25 00:01:14.061963 sshd[4863]: Accepted publickey for core from 4.175.71.9 port 37332 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:14.063687 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:14.068998 systemd-logind[1963]: New session 16 of user core.
Apr 25 00:01:14.078109 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 25 00:01:15.375060 sshd[4863]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:15.379384 systemd[1]: sshd@15-172.31.30.251:22-4.175.71.9:37332.service: Deactivated successfully.
Apr 25 00:01:15.381854 systemd[1]: session-16.scope: Deactivated successfully.
Apr 25 00:01:15.382680 systemd-logind[1963]: Session 16 logged out. Waiting for processes to exit.
Apr 25 00:01:15.384198 systemd-logind[1963]: Removed session 16.
Apr 25 00:01:15.547567 systemd[1]: Started sshd@16-172.31.30.251:22-4.175.71.9:50944.service - OpenSSH per-connection server daemon (4.175.71.9:50944).
Apr 25 00:01:16.528639 sshd[4881]: Accepted publickey for core from 4.175.71.9 port 50944 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:16.530256 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:16.535904 systemd-logind[1963]: New session 17 of user core.
Apr 25 00:01:16.543081 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 25 00:01:17.449324 sshd[4881]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:17.454112 systemd-logind[1963]: Session 17 logged out. Waiting for processes to exit.
Apr 25 00:01:17.454973 systemd[1]: sshd@16-172.31.30.251:22-4.175.71.9:50944.service: Deactivated successfully.
Apr 25 00:01:17.457034 systemd[1]: session-17.scope: Deactivated successfully.
Apr 25 00:01:17.458587 systemd-logind[1963]: Removed session 17.
Apr 25 00:01:17.621186 systemd[1]: Started sshd@17-172.31.30.251:22-4.175.71.9:50960.service - OpenSSH per-connection server daemon (4.175.71.9:50960).
Apr 25 00:01:18.601652 sshd[4893]: Accepted publickey for core from 4.175.71.9 port 50960 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:18.603562 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:18.608949 systemd-logind[1963]: New session 18 of user core.
Apr 25 00:01:18.614112 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 25 00:01:19.345605 sshd[4893]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:19.349261 systemd[1]: sshd@17-172.31.30.251:22-4.175.71.9:50960.service: Deactivated successfully.
Apr 25 00:01:19.352010 systemd[1]: session-18.scope: Deactivated successfully.
Apr 25 00:01:19.353612 systemd-logind[1963]: Session 18 logged out. Waiting for processes to exit.
Apr 25 00:01:19.355198 systemd-logind[1963]: Removed session 18.
Apr 25 00:01:24.536186 systemd[1]: Started sshd@18-172.31.30.251:22-4.175.71.9:50972.service - OpenSSH per-connection server daemon (4.175.71.9:50972).
Apr 25 00:01:25.552359 sshd[4908]: Accepted publickey for core from 4.175.71.9 port 50972 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:25.554075 sshd[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:25.559444 systemd-logind[1963]: New session 19 of user core.
Apr 25 00:01:25.564008 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 25 00:01:26.337948 sshd[4908]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:26.341998 systemd[1]: sshd@18-172.31.30.251:22-4.175.71.9:50972.service: Deactivated successfully.
Apr 25 00:01:26.344961 systemd[1]: session-19.scope: Deactivated successfully.
Apr 25 00:01:26.345982 systemd-logind[1963]: Session 19 logged out. Waiting for processes to exit.
Apr 25 00:01:26.347442 systemd-logind[1963]: Removed session 19.
Apr 25 00:01:31.499817 systemd[1]: Started sshd@19-172.31.30.251:22-4.175.71.9:37322.service - OpenSSH per-connection server daemon (4.175.71.9:37322).
Apr 25 00:01:32.500184 sshd[4921]: Accepted publickey for core from 4.175.71.9 port 37322 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:32.500944 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:32.506672 systemd-logind[1963]: New session 20 of user core.
Apr 25 00:01:32.514045 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 25 00:01:33.284231 sshd[4921]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:33.288695 systemd-logind[1963]: Session 20 logged out. Waiting for processes to exit.
Apr 25 00:01:33.289906 systemd[1]: sshd@19-172.31.30.251:22-4.175.71.9:37322.service: Deactivated successfully.
Apr 25 00:01:33.292276 systemd[1]: session-20.scope: Deactivated successfully.
Apr 25 00:01:33.293423 systemd-logind[1963]: Removed session 20.
Apr 25 00:01:33.466526 systemd[1]: Started sshd@20-172.31.30.251:22-4.175.71.9:37334.service - OpenSSH per-connection server daemon (4.175.71.9:37334).
Apr 25 00:01:34.440974 sshd[4934]: Accepted publickey for core from 4.175.71.9 port 37334 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg
Apr 25 00:01:34.442711 sshd[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 25 00:01:34.447493 systemd-logind[1963]: New session 21 of user core.
Apr 25 00:01:34.451038 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 25 00:01:36.794342 containerd[1984]: time="2026-04-25T00:01:36.794151213Z" level=info msg="StopContainer for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" with timeout 30 (s)"
Apr 25 00:01:36.797758 containerd[1984]: time="2026-04-25T00:01:36.797442299Z" level=info msg="Stop container \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" with signal terminated"
Apr 25 00:01:36.804783 systemd[1]: run-containerd-runc-k8s.io-c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db-runc.8wtalw.mount: Deactivated successfully.
Apr 25 00:01:36.856862 systemd[1]: cri-containerd-af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4.scope: Deactivated successfully.
Apr 25 00:01:36.883753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4-rootfs.mount: Deactivated successfully.
Apr 25 00:01:36.939761 containerd[1984]: time="2026-04-25T00:01:36.939708743Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 25 00:01:36.944354 containerd[1984]: time="2026-04-25T00:01:36.944118484Z" level=info msg="shim disconnected" id=af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4 namespace=k8s.io
Apr 25 00:01:36.944354 containerd[1984]: time="2026-04-25T00:01:36.944178078Z" level=warning msg="cleaning up after shim disconnected" id=af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4 namespace=k8s.io
Apr 25 00:01:36.944354 containerd[1984]: time="2026-04-25T00:01:36.944192854Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:36.954347 containerd[1984]: time="2026-04-25T00:01:36.954010594Z" level=info msg="StopContainer for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" with timeout 2 (s)"
Apr 25 00:01:36.954733 containerd[1984]: time="2026-04-25T00:01:36.954705593Z" level=info msg="Stop container \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" with signal terminated"
Apr 25 00:01:36.968100 systemd-networkd[1905]: lxc_health: Link DOWN
Apr 25 00:01:36.968111 systemd-networkd[1905]: lxc_health: Lost carrier
Apr 25 00:01:36.981183 systemd[1]: cri-containerd-c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db.scope: Deactivated successfully.
Apr 25 00:01:36.982345 containerd[1984]: time="2026-04-25T00:01:36.980976761Z" level=info msg="StopContainer for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" returns successfully"
Apr 25 00:01:36.981479 systemd[1]: cri-containerd-c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db.scope: Consumed 8.265s CPU time.
Apr 25 00:01:36.989092 containerd[1984]: time="2026-04-25T00:01:36.988309208Z" level=info msg="StopPodSandbox for \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\""
Apr 25 00:01:36.989092 containerd[1984]: time="2026-04-25T00:01:36.988368724Z" level=info msg="Container to stop \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 25 00:01:36.992533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3-shm.mount: Deactivated successfully.
Apr 25 00:01:37.005436 systemd[1]: cri-containerd-65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3.scope: Deactivated successfully.
Apr 25 00:01:37.030209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db-rootfs.mount: Deactivated successfully.
Apr 25 00:01:37.044663 containerd[1984]: time="2026-04-25T00:01:37.044522041Z" level=info msg="shim disconnected" id=c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db namespace=k8s.io
Apr 25 00:01:37.044663 containerd[1984]: time="2026-04-25T00:01:37.044589510Z" level=warning msg="cleaning up after shim disconnected" id=c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db namespace=k8s.io
Apr 25 00:01:37.044663 containerd[1984]: time="2026-04-25T00:01:37.044600548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:37.063251 containerd[1984]: time="2026-04-25T00:01:37.062970793Z" level=info msg="shim disconnected" id=65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3 namespace=k8s.io
Apr 25 00:01:37.063251 containerd[1984]: time="2026-04-25T00:01:37.063070872Z" level=warning msg="cleaning up after shim disconnected" id=65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3 namespace=k8s.io
Apr 25 00:01:37.063251 containerd[1984]: time="2026-04-25T00:01:37.063101218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:37.077936 containerd[1984]: time="2026-04-25T00:01:37.077886142Z" level=info msg="StopContainer for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" returns successfully"
Apr 25 00:01:37.078517 containerd[1984]: time="2026-04-25T00:01:37.078481988Z" level=info msg="StopPodSandbox for \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\""
Apr 25 00:01:37.078604 containerd[1984]: time="2026-04-25T00:01:37.078522307Z" level=info msg="Container to stop \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 25 00:01:37.078604 containerd[1984]: time="2026-04-25T00:01:37.078544027Z" level=info msg="Container to stop \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 25 00:01:37.078604 containerd[1984]: time="2026-04-25T00:01:37.078557683Z" level=info msg="Container to stop \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 25 00:01:37.078604 containerd[1984]: time="2026-04-25T00:01:37.078572426Z" level=info msg="Container to stop \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 25 00:01:37.078604 containerd[1984]: time="2026-04-25T00:01:37.078585726Z" level=info msg="Container to stop \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 25 00:01:37.095331 systemd[1]: cri-containerd-9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e.scope: Deactivated successfully.
Apr 25 00:01:37.111244 containerd[1984]: time="2026-04-25T00:01:37.109122208Z" level=info msg="TearDown network for sandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" successfully"
Apr 25 00:01:37.111244 containerd[1984]: time="2026-04-25T00:01:37.109168176Z" level=info msg="StopPodSandbox for \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" returns successfully"
Apr 25 00:01:37.149553 containerd[1984]: time="2026-04-25T00:01:37.149463099Z" level=info msg="shim disconnected" id=9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e namespace=k8s.io
Apr 25 00:01:37.149553 containerd[1984]: time="2026-04-25T00:01:37.149538175Z" level=warning msg="cleaning up after shim disconnected" id=9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e namespace=k8s.io
Apr 25 00:01:37.149553 containerd[1984]: time="2026-04-25T00:01:37.149551351Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:37.164913 containerd[1984]: time="2026-04-25T00:01:37.164862316Z" level=info msg="TearDown network for sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" successfully"
Apr 25 00:01:37.164913 containerd[1984]: time="2026-04-25T00:01:37.164903829Z" level=info msg="StopPodSandbox for \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" returns successfully"
Apr 25 00:01:37.218555 kubelet[3193]: I0425 00:01:37.217740 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-lib-modules\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.218555 kubelet[3193]: I0425 00:01:37.217849 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-net\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.218555 kubelet[3193]: I0425 00:01:37.217885 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cni-path\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.218555 kubelet[3193]: I0425 00:01:37.217921 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a90b521-7ed3-4db6-ba85-db810c0452db-cilium-config-path\") pod \"9a90b521-7ed3-4db6-ba85-db810c0452db\" (UID: \"9a90b521-7ed3-4db6-ba85-db810c0452db\") "
Apr 25 00:01:37.218555 kubelet[3193]: I0425 00:01:37.217951 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc37fa9b-717a-49c9-be15-2be707baec3a-clustermesh-secrets\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.218555 kubelet[3193]: I0425 00:01:37.217977 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-hubble-tls\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219216 kubelet[3193]: I0425 00:01:37.217998 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-etc-cni-netd\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219216 kubelet[3193]: I0425 00:01:37.218020 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-cgroup\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219216 kubelet[3193]: I0425 00:01:37.218043 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-run\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219216 kubelet[3193]: I0425 00:01:37.218068 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-xtables-lock\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219216 kubelet[3193]: I0425 00:01:37.218157 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-kernel\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219216 kubelet[3193]: I0425 00:01:37.218180 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-hostproc\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219485 kubelet[3193]: I0425 00:01:37.218202 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-bpf-maps\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219485 kubelet[3193]: I0425 00:01:37.218229 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-config-path\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219485 kubelet[3193]: I0425 00:01:37.218255 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57v96\" (UniqueName: \"kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96\") pod \"dc37fa9b-717a-49c9-be15-2be707baec3a\" (UID: \"dc37fa9b-717a-49c9-be15-2be707baec3a\") "
Apr 25 00:01:37.219485 kubelet[3193]: I0425 00:01:37.218282 3193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wld7t\" (UniqueName: \"kubernetes.io/projected/9a90b521-7ed3-4db6-ba85-db810c0452db-kube-api-access-wld7t\") pod \"9a90b521-7ed3-4db6-ba85-db810c0452db\" (UID: \"9a90b521-7ed3-4db6-ba85-db810c0452db\") "
Apr 25 00:01:37.228944 kubelet[3193]: I0425 00:01:37.223371 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.228944 kubelet[3193]: I0425 00:01:37.227949 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.228944 kubelet[3193]: I0425 00:01:37.227976 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.228944 kubelet[3193]: I0425 00:01:37.227998 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.236556 kubelet[3193]: I0425 00:01:37.236447 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a90b521-7ed3-4db6-ba85-db810c0452db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a90b521-7ed3-4db6-ba85-db810c0452db" (UID: "9a90b521-7ed3-4db6-ba85-db810c0452db"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 25 00:01:37.236556 kubelet[3193]: I0425 00:01:37.236499 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.239906 kubelet[3193]: I0425 00:01:37.239181 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a90b521-7ed3-4db6-ba85-db810c0452db-kube-api-access-wld7t" (OuterVolumeSpecName: "kube-api-access-wld7t") pod "9a90b521-7ed3-4db6-ba85-db810c0452db" (UID: "9a90b521-7ed3-4db6-ba85-db810c0452db"). InnerVolumeSpecName "kube-api-access-wld7t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 25 00:01:37.239906 kubelet[3193]: I0425 00:01:37.239260 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.239906 kubelet[3193]: I0425 00:01:37.239286 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.239906 kubelet[3193]: I0425 00:01:37.239311 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.239906 kubelet[3193]: I0425 00:01:37.239331 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.240248 kubelet[3193]: I0425 00:01:37.239709 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc37fa9b-717a-49c9-be15-2be707baec3a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 25 00:01:37.242774 kubelet[3193]: I0425 00:01:37.242726 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 25 00:01:37.242925 kubelet[3193]: I0425 00:01:37.242786 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 25 00:01:37.243092 kubelet[3193]: I0425 00:01:37.243066 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 25 00:01:37.245239 kubelet[3193]: I0425 00:01:37.245195 3193 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96" (OuterVolumeSpecName: "kube-api-access-57v96") pod "dc37fa9b-717a-49c9-be15-2be707baec3a" (UID: "dc37fa9b-717a-49c9-be15-2be707baec3a"). InnerVolumeSpecName "kube-api-access-57v96". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321760 3193 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cni-path\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321844 3193 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a90b521-7ed3-4db6-ba85-db810c0452db-cilium-config-path\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321864 3193 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc37fa9b-717a-49c9-be15-2be707baec3a-clustermesh-secrets\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321880 3193 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-hubble-tls\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321895 3193 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-etc-cni-netd\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321907 3193 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-cgroup\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321920 3193 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-run\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323659 kubelet[3193]: I0425 00:01:37.321931 3193 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-xtables-lock\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.321942 3193 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-kernel\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.321954 3193 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-hostproc\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.321966 3193 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-bpf-maps\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.321978 3193 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc37fa9b-717a-49c9-be15-2be707baec3a-cilium-config-path\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.321990 3193 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57v96\" (UniqueName: \"kubernetes.io/projected/dc37fa9b-717a-49c9-be15-2be707baec3a-kube-api-access-57v96\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.322001 3193 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wld7t\" (UniqueName: \"kubernetes.io/projected/9a90b521-7ed3-4db6-ba85-db810c0452db-kube-api-access-wld7t\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.322014 3193 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-lib-modules\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.323978 kubelet[3193]: I0425 00:01:37.322028 3193 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc37fa9b-717a-49c9-be15-2be707baec3a-host-proc-sys-net\") on node \"ip-172-31-30-251\" DevicePath \"\""
Apr 25 00:01:37.541665 systemd[1]: Removed slice kubepods-besteffort-pod9a90b521_7ed3_4db6_ba85_db810c0452db.slice - libcontainer container kubepods-besteffort-pod9a90b521_7ed3_4db6_ba85_db810c0452db.slice.
Apr 25 00:01:37.561381 kubelet[3193]: I0425 00:01:37.561340 3193 scope.go:117] "RemoveContainer" containerID="af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4" Apr 25 00:01:37.566035 containerd[1984]: time="2026-04-25T00:01:37.565964284Z" level=info msg="RemoveContainer for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\"" Apr 25 00:01:37.570559 containerd[1984]: time="2026-04-25T00:01:37.570334762Z" level=info msg="RemoveContainer for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" returns successfully" Apr 25 00:01:37.573013 systemd[1]: Removed slice kubepods-burstable-poddc37fa9b_717a_49c9_be15_2be707baec3a.slice - libcontainer container kubepods-burstable-poddc37fa9b_717a_49c9_be15_2be707baec3a.slice. Apr 25 00:01:37.573157 systemd[1]: kubepods-burstable-poddc37fa9b_717a_49c9_be15_2be707baec3a.slice: Consumed 8.362s CPU time. Apr 25 00:01:37.594362 kubelet[3193]: I0425 00:01:37.593532 3193 scope.go:117] "RemoveContainer" containerID="af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4" Apr 25 00:01:37.625615 containerd[1984]: time="2026-04-25T00:01:37.605997771Z" level=error msg="ContainerStatus for \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\": not found" Apr 25 00:01:37.633113 kubelet[3193]: E0425 00:01:37.631776 3193 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\": not found" containerID="af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4" Apr 25 00:01:37.644520 kubelet[3193]: I0425 00:01:37.631882 3193 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4"} err="failed to get container status \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"af13dcebf7c713855682e67719d9bf65812f744daf46139ce4dfcb90f32ae6f4\": not found" Apr 25 00:01:37.644520 kubelet[3193]: I0425 00:01:37.644520 3193 scope.go:117] "RemoveContainer" containerID="c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db" Apr 25 00:01:37.646156 containerd[1984]: time="2026-04-25T00:01:37.646039543Z" level=info msg="RemoveContainer for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\"" Apr 25 00:01:37.650537 containerd[1984]: time="2026-04-25T00:01:37.650494748Z" level=info msg="RemoveContainer for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" returns successfully" Apr 25 00:01:37.650730 kubelet[3193]: I0425 00:01:37.650693 3193 scope.go:117] "RemoveContainer" containerID="38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1" Apr 25 00:01:37.651853 containerd[1984]: time="2026-04-25T00:01:37.651797602Z" level=info msg="RemoveContainer for \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\"" Apr 25 00:01:37.672691 containerd[1984]: time="2026-04-25T00:01:37.672647511Z" level=info msg="RemoveContainer for \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\" returns successfully" Apr 25 00:01:37.673046 kubelet[3193]: I0425 00:01:37.673010 3193 scope.go:117] "RemoveContainer" containerID="b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f" Apr 25 00:01:37.674430 containerd[1984]: time="2026-04-25T00:01:37.674396989Z" level=info msg="RemoveContainer for \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\"" Apr 25 00:01:37.677786 containerd[1984]: time="2026-04-25T00:01:37.677746834Z" level=info msg="RemoveContainer for 
\"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\" returns successfully" Apr 25 00:01:37.677961 kubelet[3193]: I0425 00:01:37.677935 3193 scope.go:117] "RemoveContainer" containerID="cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025" Apr 25 00:01:37.679148 containerd[1984]: time="2026-04-25T00:01:37.679116299Z" level=info msg="RemoveContainer for \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\"" Apr 25 00:01:37.682838 containerd[1984]: time="2026-04-25T00:01:37.682797695Z" level=info msg="RemoveContainer for \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\" returns successfully" Apr 25 00:01:37.683100 kubelet[3193]: I0425 00:01:37.683075 3193 scope.go:117] "RemoveContainer" containerID="466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92" Apr 25 00:01:37.684232 containerd[1984]: time="2026-04-25T00:01:37.684204940Z" level=info msg="RemoveContainer for \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\"" Apr 25 00:01:37.687688 containerd[1984]: time="2026-04-25T00:01:37.687657461Z" level=info msg="RemoveContainer for \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\" returns successfully" Apr 25 00:01:37.687871 kubelet[3193]: I0425 00:01:37.687820 3193 scope.go:117] "RemoveContainer" containerID="c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db" Apr 25 00:01:37.688072 containerd[1984]: time="2026-04-25T00:01:37.688037334Z" level=error msg="ContainerStatus for \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\": not found" Apr 25 00:01:37.688244 kubelet[3193]: E0425 00:01:37.688221 3193 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\": not found" containerID="c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db" Apr 25 00:01:37.688359 kubelet[3193]: I0425 00:01:37.688251 3193 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db"} err="failed to get container status \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\": rpc error: code = NotFound desc = an error occurred when try to find container \"c625e32e70a316a114341a701d0baf70e8d6b3f952b6fb773222f00a5ca7a4db\": not found" Apr 25 00:01:37.688359 kubelet[3193]: I0425 00:01:37.688281 3193 scope.go:117] "RemoveContainer" containerID="38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1" Apr 25 00:01:37.688529 containerd[1984]: time="2026-04-25T00:01:37.688492500Z" level=error msg="ContainerStatus for \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\": not found" Apr 25 00:01:37.688661 kubelet[3193]: E0425 00:01:37.688629 3193 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\": not found" containerID="38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1" Apr 25 00:01:37.688727 kubelet[3193]: I0425 00:01:37.688656 3193 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1"} err="failed to get container status \"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"38e31c2d91a6bb95362e15d403966c510caaf213d90a94018c8e13f069f4cac1\": not found" Apr 25 00:01:37.688727 kubelet[3193]: I0425 00:01:37.688679 3193 scope.go:117] "RemoveContainer" containerID="b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f" Apr 25 00:01:37.688914 containerd[1984]: time="2026-04-25T00:01:37.688883656Z" level=error msg="ContainerStatus for \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\": not found" Apr 25 00:01:37.689074 kubelet[3193]: E0425 00:01:37.689014 3193 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\": not found" containerID="b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f" Apr 25 00:01:37.689074 kubelet[3193]: I0425 00:01:37.689056 3193 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f"} err="failed to get container status \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4b4a17a64cd8c66a440985e9826fccbd5a5ce1bff7f8c23b2ec6c8197b0702f\": not found" Apr 25 00:01:37.689172 kubelet[3193]: I0425 00:01:37.689078 3193 scope.go:117] "RemoveContainer" containerID="cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025" Apr 25 00:01:37.689442 containerd[1984]: time="2026-04-25T00:01:37.689370901Z" level=error msg="ContainerStatus for \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\": not found" Apr 25 00:01:37.689554 kubelet[3193]: E0425 00:01:37.689527 3193 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\": not found" containerID="cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025" Apr 25 00:01:37.689614 kubelet[3193]: I0425 00:01:37.689552 3193 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025"} err="failed to get container status \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc8f6849aef27f0a205dd08c95b71072c901e72a617331e072f8e26db9c82025\": not found" Apr 25 00:01:37.689614 kubelet[3193]: I0425 00:01:37.689582 3193 scope.go:117] "RemoveContainer" containerID="466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92" Apr 25 00:01:37.689790 containerd[1984]: time="2026-04-25T00:01:37.689753909Z" level=error msg="ContainerStatus for \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\": not found" Apr 25 00:01:37.689927 kubelet[3193]: E0425 00:01:37.689899 3193 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\": not found" containerID="466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92" Apr 25 00:01:37.689983 kubelet[3193]: I0425 00:01:37.689925 3193 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92"} err="failed to get container status \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\": rpc error: code = NotFound desc = an error occurred when try to find container \"466a4fa4ebc375bc39fdfb773e6803e8bc60815986ed476c130c3a5100e15a92\": not found" Apr 25 00:01:37.791171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e-rootfs.mount: Deactivated successfully. Apr 25 00:01:37.791292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e-shm.mount: Deactivated successfully. Apr 25 00:01:37.791383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3-rootfs.mount: Deactivated successfully. Apr 25 00:01:37.791467 systemd[1]: var-lib-kubelet-pods-dc37fa9b\x2d717a\x2d49c9\x2dbe15\x2d2be707baec3a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57v96.mount: Deactivated successfully. Apr 25 00:01:37.791555 systemd[1]: var-lib-kubelet-pods-9a90b521\x2d7ed3\x2d4db6\x2dba85\x2ddb810c0452db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwld7t.mount: Deactivated successfully. Apr 25 00:01:37.791654 systemd[1]: var-lib-kubelet-pods-dc37fa9b\x2d717a\x2d49c9\x2dbe15\x2d2be707baec3a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 25 00:01:37.791743 systemd[1]: var-lib-kubelet-pods-dc37fa9b\x2d717a\x2d49c9\x2dbe15\x2d2be707baec3a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 25 00:01:37.986021 kubelet[3193]: I0425 00:01:37.985983 3193 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a90b521-7ed3-4db6-ba85-db810c0452db" path="/var/lib/kubelet/pods/9a90b521-7ed3-4db6-ba85-db810c0452db/volumes" Apr 25 00:01:37.986605 kubelet[3193]: I0425 00:01:37.986571 3193 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc37fa9b-717a-49c9-be15-2be707baec3a" path="/var/lib/kubelet/pods/dc37fa9b-717a-49c9-be15-2be707baec3a/volumes" Apr 25 00:01:38.864506 sshd[4934]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:38.869435 systemd-logind[1963]: Session 21 logged out. Waiting for processes to exit. Apr 25 00:01:38.869995 systemd[1]: sshd@20-172.31.30.251:22-4.175.71.9:37334.service: Deactivated successfully. Apr 25 00:01:38.872787 systemd[1]: session-21.scope: Deactivated successfully. Apr 25 00:01:38.874178 systemd-logind[1963]: Removed session 21. Apr 25 00:01:39.050316 systemd[1]: Started sshd@21-172.31.30.251:22-4.175.71.9:39758.service - OpenSSH per-connection server daemon (4.175.71.9:39758). 
Apr 25 00:01:39.123623 kubelet[3193]: E0425 00:01:39.123476 3193 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 25 00:01:39.600889 ntpd[1953]: Deleting interface #12 lxc_health, fe80::5414:edff:feaf:aaa3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Apr 25 00:01:39.601243 ntpd[1953]: 25 Apr 00:01:39 ntpd[1953]: Deleting interface #12 lxc_health, fe80::5414:edff:feaf:aaa3%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Apr 25 00:01:40.083875 sshd[5099]: Accepted publickey for core from 4.175.71.9 port 39758 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:01:40.084935 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:40.091008 systemd-logind[1963]: New session 22 of user core. Apr 25 00:01:40.100088 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 25 00:01:41.162962 systemd[1]: Created slice kubepods-burstable-podaeab8f16_0e5f_436a_a670_2a60bbc7c3d5.slice - libcontainer container kubepods-burstable-podaeab8f16_0e5f_436a_a670_2a60bbc7c3d5.slice. 
Apr 25 00:01:41.256307 kubelet[3193]: I0425 00:01:41.256255 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-cilium-run\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256307 kubelet[3193]: I0425 00:01:41.256309 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-hostproc\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256808 kubelet[3193]: I0425 00:01:41.256339 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-cilium-cgroup\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256808 kubelet[3193]: I0425 00:01:41.256362 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-xtables-lock\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256808 kubelet[3193]: I0425 00:01:41.256383 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-cilium-config-path\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256808 kubelet[3193]: I0425 00:01:41.256405 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-host-proc-sys-net\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256808 kubelet[3193]: I0425 00:01:41.256428 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-hubble-tls\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.256808 kubelet[3193]: I0425 00:01:41.256451 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-cilium-ipsec-secrets\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257008 kubelet[3193]: I0425 00:01:41.256481 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-host-proc-sys-kernel\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257008 kubelet[3193]: I0425 00:01:41.256509 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggb8\" (UniqueName: \"kubernetes.io/projected/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-kube-api-access-5ggb8\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257008 kubelet[3193]: I0425 00:01:41.256533 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-cni-path\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257008 kubelet[3193]: I0425 00:01:41.256563 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-bpf-maps\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257008 kubelet[3193]: I0425 00:01:41.256588 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-etc-cni-netd\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257008 kubelet[3193]: I0425 00:01:41.256613 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-clustermesh-secrets\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.257154 kubelet[3193]: I0425 00:01:41.256634 3193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeab8f16-0e5f-436a-a670-2a60bbc7c3d5-lib-modules\") pod \"cilium-bbd7f\" (UID: \"aeab8f16-0e5f-436a-a670-2a60bbc7c3d5\") " pod="kube-system/cilium-bbd7f" Apr 25 00:01:41.324628 sshd[5099]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:41.328469 systemd[1]: sshd@21-172.31.30.251:22-4.175.71.9:39758.service: Deactivated successfully. Apr 25 00:01:41.331208 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 25 00:01:41.332764 systemd-logind[1963]: Session 22 logged out. Waiting for processes to exit. Apr 25 00:01:41.334506 systemd-logind[1963]: Removed session 22. Apr 25 00:01:41.471584 containerd[1984]: time="2026-04-25T00:01:41.471452320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbd7f,Uid:aeab8f16-0e5f-436a-a670-2a60bbc7c3d5,Namespace:kube-system,Attempt:0,}" Apr 25 00:01:41.498135 systemd[1]: Started sshd@22-172.31.30.251:22-4.175.71.9:39770.service - OpenSSH per-connection server daemon (4.175.71.9:39770). Apr 25 00:01:41.505650 containerd[1984]: time="2026-04-25T00:01:41.505492419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 25 00:01:41.505650 containerd[1984]: time="2026-04-25T00:01:41.505616210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 25 00:01:41.506031 containerd[1984]: time="2026-04-25T00:01:41.505813916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:41.506031 containerd[1984]: time="2026-04-25T00:01:41.505965017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 25 00:01:41.533045 systemd[1]: Started cri-containerd-3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070.scope - libcontainer container 3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070. 
Apr 25 00:01:41.558344 containerd[1984]: time="2026-04-25T00:01:41.558302396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bbd7f,Uid:aeab8f16-0e5f-436a-a670-2a60bbc7c3d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\"" Apr 25 00:01:41.565345 containerd[1984]: time="2026-04-25T00:01:41.565216000Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 25 00:01:41.578595 containerd[1984]: time="2026-04-25T00:01:41.578534966Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967\"" Apr 25 00:01:41.579241 containerd[1984]: time="2026-04-25T00:01:41.579210943Z" level=info msg="StartContainer for \"77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967\"" Apr 25 00:01:41.609033 systemd[1]: Started cri-containerd-77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967.scope - libcontainer container 77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967. Apr 25 00:01:41.638525 containerd[1984]: time="2026-04-25T00:01:41.638292010Z" level=info msg="StartContainer for \"77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967\" returns successfully" Apr 25 00:01:41.660933 systemd[1]: cri-containerd-77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967.scope: Deactivated successfully. 
Apr 25 00:01:41.723455 containerd[1984]: time="2026-04-25T00:01:41.723071090Z" level=info msg="shim disconnected" id=77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967 namespace=k8s.io Apr 25 00:01:41.723455 containerd[1984]: time="2026-04-25T00:01:41.723169574Z" level=warning msg="cleaning up after shim disconnected" id=77689545eba1178f4d5c306c1e54bd5472c498bc066ce1bfe43eac331a1f6967 namespace=k8s.io Apr 25 00:01:41.723455 containerd[1984]: time="2026-04-25T00:01:41.723208047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:01:42.457869 sshd[5118]: Accepted publickey for core from 4.175.71.9 port 39770 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:01:42.458900 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:42.464551 systemd-logind[1963]: New session 23 of user core. Apr 25 00:01:42.472103 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 25 00:01:42.587275 containerd[1984]: time="2026-04-25T00:01:42.587223883Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 25 00:01:42.605622 containerd[1984]: time="2026-04-25T00:01:42.605518031Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0\"" Apr 25 00:01:42.607861 containerd[1984]: time="2026-04-25T00:01:42.607223100Z" level=info msg="StartContainer for \"c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0\"" Apr 25 00:01:42.644019 systemd[1]: Started cri-containerd-c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0.scope - libcontainer container 
c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0. Apr 25 00:01:42.679436 containerd[1984]: time="2026-04-25T00:01:42.678619024Z" level=info msg="StartContainer for \"c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0\" returns successfully" Apr 25 00:01:42.685786 systemd[1]: cri-containerd-c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0.scope: Deactivated successfully. Apr 25 00:01:42.721380 containerd[1984]: time="2026-04-25T00:01:42.721236574Z" level=info msg="shim disconnected" id=c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0 namespace=k8s.io Apr 25 00:01:42.721380 containerd[1984]: time="2026-04-25T00:01:42.721295340Z" level=warning msg="cleaning up after shim disconnected" id=c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0 namespace=k8s.io Apr 25 00:01:42.721380 containerd[1984]: time="2026-04-25T00:01:42.721306610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:01:43.113659 sshd[5118]: pam_unix(sshd:session): session closed for user core Apr 25 00:01:43.119197 systemd-logind[1963]: Session 23 logged out. Waiting for processes to exit. Apr 25 00:01:43.119987 systemd[1]: sshd@22-172.31.30.251:22-4.175.71.9:39770.service: Deactivated successfully. Apr 25 00:01:43.122689 systemd[1]: session-23.scope: Deactivated successfully. Apr 25 00:01:43.124107 systemd-logind[1963]: Removed session 23. Apr 25 00:01:43.283876 systemd[1]: Started sshd@23-172.31.30.251:22-4.175.71.9:39784.service - OpenSSH per-connection server daemon (4.175.71.9:39784). Apr 25 00:01:43.364561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c544f233ad6cebd8acaaf8561d58b976ecc27e2aad3d993a89b3434610481fe0-rootfs.mount: Deactivated successfully. 
Apr 25 00:01:43.588805 containerd[1984]: time="2026-04-25T00:01:43.588574156Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 25 00:01:43.631106 containerd[1984]: time="2026-04-25T00:01:43.630980547Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502\"" Apr 25 00:01:43.631728 containerd[1984]: time="2026-04-25T00:01:43.631545576Z" level=info msg="StartContainer for \"5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502\"" Apr 25 00:01:43.669078 systemd[1]: Started cri-containerd-5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502.scope - libcontainer container 5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502. Apr 25 00:01:43.724210 containerd[1984]: time="2026-04-25T00:01:43.724172268Z" level=info msg="StartContainer for \"5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502\" returns successfully" Apr 25 00:01:43.753795 systemd[1]: cri-containerd-5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502.scope: Deactivated successfully. 
Apr 25 00:01:43.799075 containerd[1984]: time="2026-04-25T00:01:43.799010670Z" level=info msg="shim disconnected" id=5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502 namespace=k8s.io Apr 25 00:01:43.799075 containerd[1984]: time="2026-04-25T00:01:43.799068139Z" level=warning msg="cleaning up after shim disconnected" id=5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502 namespace=k8s.io Apr 25 00:01:43.799498 containerd[1984]: time="2026-04-25T00:01:43.799079897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 25 00:01:44.125387 kubelet[3193]: E0425 00:01:44.125300 3193 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 25 00:01:44.248396 sshd[5287]: Accepted publickey for core from 4.175.71.9 port 39784 ssh2: RSA SHA256:5HhJ2X4iOQfF5HWKIEVpWTPXYo3rjlnxoO1NrD+aEDg Apr 25 00:01:44.249116 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 25 00:01:44.254775 systemd-logind[1963]: New session 24 of user core. Apr 25 00:01:44.260049 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 25 00:01:44.364024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a0869b72955d3ad454aa83ab6f390f91fd61f731f8f478dfb640be0eb97d502-rootfs.mount: Deactivated successfully. Apr 25 00:01:44.611070 containerd[1984]: time="2026-04-25T00:01:44.611024588Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 25 00:01:44.634789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126649810.mount: Deactivated successfully. 
Apr 25 00:01:44.636345 containerd[1984]: time="2026-04-25T00:01:44.636304131Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64\""
Apr 25 00:01:44.638377 containerd[1984]: time="2026-04-25T00:01:44.637207946Z" level=info msg="StartContainer for \"1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64\""
Apr 25 00:01:44.677076 systemd[1]: Started cri-containerd-1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64.scope - libcontainer container 1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64.
Apr 25 00:01:44.709011 systemd[1]: cri-containerd-1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64.scope: Deactivated successfully.
Apr 25 00:01:44.712507 containerd[1984]: time="2026-04-25T00:01:44.712467915Z" level=info msg="StartContainer for \"1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64\" returns successfully"
Apr 25 00:01:44.750477 containerd[1984]: time="2026-04-25T00:01:44.750357441Z" level=info msg="shim disconnected" id=1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64 namespace=k8s.io
Apr 25 00:01:44.750477 containerd[1984]: time="2026-04-25T00:01:44.750470220Z" level=warning msg="cleaning up after shim disconnected" id=1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64 namespace=k8s.io
Apr 25 00:01:44.750477 containerd[1984]: time="2026-04-25T00:01:44.750482585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 25 00:01:45.364104 systemd[1]: run-containerd-runc-k8s.io-1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64-runc.i1zlnr.mount: Deactivated successfully.
Apr 25 00:01:45.364225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a22e34514644ec18267ba749bc7c60cce2921bd26799dccdc0342e9ee527f64-rootfs.mount: Deactivated successfully.
Apr 25 00:01:45.597899 containerd[1984]: time="2026-04-25T00:01:45.597692189Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 25 00:01:45.620086 containerd[1984]: time="2026-04-25T00:01:45.618845445Z" level=info msg="CreateContainer within sandbox \"3e6a56d005d7a2ddb1a7c4e5f75db3fe77d3c1da995293a2ac03a9398bd19070\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11\""
Apr 25 00:01:45.622426 containerd[1984]: time="2026-04-25T00:01:45.621132231Z" level=info msg="StartContainer for \"c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11\""
Apr 25 00:01:45.663003 systemd[1]: Started cri-containerd-c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11.scope - libcontainer container c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11.
Apr 25 00:01:45.695532 containerd[1984]: time="2026-04-25T00:01:45.695475552Z" level=info msg="StartContainer for \"c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11\" returns successfully"
Apr 25 00:01:46.338733 kubelet[3193]: I0425 00:01:46.338684    3193 setters.go:618] "Node became not ready" node="ip-172-31-30-251" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-25T00:01:46Z","lastTransitionTime":"2026-04-25T00:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 25 00:01:46.533411 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 25 00:01:46.623806 kubelet[3193]: I0425 00:01:46.623130    3193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bbd7f" podStartSLOduration=5.623109274 podStartE2EDuration="5.623109274s" podCreationTimestamp="2026-04-25 00:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-25 00:01:46.621905004 +0000 UTC m=+112.779877229" watchObservedRunningTime="2026-04-25 00:01:46.623109274 +0000 UTC m=+112.781081500"
Apr 25 00:01:49.608998 systemd-networkd[1905]: lxc_health: Link UP
Apr 25 00:01:49.613156 systemd-networkd[1905]: lxc_health: Gained carrier
Apr 25 00:01:49.620263 (udev-worker)[5986]: Network interface NamePolicy= disabled on kernel command line.
Apr 25 00:01:50.866369 systemd-networkd[1905]: lxc_health: Gained IPv6LL
Apr 25 00:01:53.601020 ntpd[1953]: Listen normally on 15 lxc_health [fe80::704b:9ff:fef7:d76%14]:123
Apr 25 00:01:53.601948 ntpd[1953]: 25 Apr 00:01:53 ntpd[1953]: Listen normally on 15 lxc_health [fe80::704b:9ff:fef7:d76%14]:123
Apr 25 00:01:54.000088 containerd[1984]: time="2026-04-25T00:01:54.000038227Z" level=info msg="StopPodSandbox for \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\""
Apr 25 00:01:54.000563 containerd[1984]: time="2026-04-25T00:01:54.000157382Z" level=info msg="TearDown network for sandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" successfully"
Apr 25 00:01:54.000563 containerd[1984]: time="2026-04-25T00:01:54.000174759Z" level=info msg="StopPodSandbox for \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" returns successfully"
Apr 25 00:01:54.002251 containerd[1984]: time="2026-04-25T00:01:54.002215118Z" level=info msg="RemovePodSandbox for \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\""
Apr 25 00:01:54.010979 containerd[1984]: time="2026-04-25T00:01:54.010901461Z" level=info msg="Forcibly stopping sandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\""
Apr 25 00:01:54.012069 containerd[1984]: time="2026-04-25T00:01:54.011080548Z" level=info msg="TearDown network for sandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" successfully"
Apr 25 00:01:54.029967 containerd[1984]: time="2026-04-25T00:01:54.029151617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 25 00:01:54.030141 containerd[1984]: time="2026-04-25T00:01:54.030023395Z" level=info msg="RemovePodSandbox \"65131dd26add77e85ed143bac3876426230056f5c87f2d769b385f44f8e501e3\" returns successfully"
Apr 25 00:01:54.030743 containerd[1984]: time="2026-04-25T00:01:54.030704621Z" level=info msg="StopPodSandbox for \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\""
Apr 25 00:01:54.031374 containerd[1984]: time="2026-04-25T00:01:54.030812667Z" level=info msg="TearDown network for sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" successfully"
Apr 25 00:01:54.031467 containerd[1984]: time="2026-04-25T00:01:54.031377905Z" level=info msg="StopPodSandbox for \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" returns successfully"
Apr 25 00:01:54.036222 containerd[1984]: time="2026-04-25T00:01:54.036177930Z" level=info msg="RemovePodSandbox for \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\""
Apr 25 00:01:54.036366 containerd[1984]: time="2026-04-25T00:01:54.036226376Z" level=info msg="Forcibly stopping sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\""
Apr 25 00:01:54.036366 containerd[1984]: time="2026-04-25T00:01:54.036305975Z" level=info msg="TearDown network for sandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" successfully"
Apr 25 00:01:54.055890 containerd[1984]: time="2026-04-25T00:01:54.055174760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 25 00:01:54.055890 containerd[1984]: time="2026-04-25T00:01:54.055256862Z" level=info msg="RemovePodSandbox \"9144b97fb1899bdf85596efaf9e5c9ded0b5175c9751c9ffe48373a79abfa67e\" returns successfully"
Apr 25 00:01:54.060998 systemd[1]: run-containerd-runc-k8s.io-c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11-runc.VQVqdO.mount: Deactivated successfully.
Apr 25 00:01:56.284162 systemd[1]: run-containerd-runc-k8s.io-c5f99c57508186bd57ac95d1847b6d3f63fdfe6c1c630260ac14719fc05bdf11-runc.JNJwpf.mount: Deactivated successfully.
Apr 25 00:01:56.502526 sshd[5287]: pam_unix(sshd:session): session closed for user core
Apr 25 00:01:56.507450 systemd-logind[1963]: Session 24 logged out. Waiting for processes to exit.
Apr 25 00:01:56.507976 systemd[1]: sshd@23-172.31.30.251:22-4.175.71.9:39784.service: Deactivated successfully.
Apr 25 00:01:56.510708 systemd[1]: session-24.scope: Deactivated successfully.
Apr 25 00:01:56.512103 systemd-logind[1963]: Removed session 24.