Sep 9 05:35:18.911382 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025 Sep 9 05:35:18.911409 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:35:18.911420 kernel: BIOS-provided physical RAM map: Sep 9 05:35:18.911427 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 05:35:18.911433 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 9 05:35:18.911440 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 9 05:35:18.911448 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 9 05:35:18.911456 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 9 05:35:18.911465 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 9 05:35:18.911472 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 9 05:35:18.911479 kernel: NX (Execute Disable) protection: active Sep 9 05:35:18.911486 kernel: APIC: Static calls initialized Sep 9 05:35:18.911493 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 9 05:35:18.911500 kernel: extended physical RAM map: Sep 9 05:35:18.911512 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 05:35:18.911519 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Sep 9 05:35:18.911527 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Sep 9 05:35:18.911535 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Sep 9 05:35:18.911542 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 9 05:35:18.911550 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 9 05:35:18.911558 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 9 05:35:18.911566 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 9 05:35:18.911573 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 9 05:35:18.911581 kernel: efi: EFI v2.7 by EDK II Sep 9 05:35:18.911591 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Sep 9 05:35:18.911599 kernel: secureboot: Secure boot disabled Sep 9 05:35:18.911607 kernel: SMBIOS 2.7 present. 
Sep 9 05:35:18.911614 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 9 05:35:18.911622 kernel: DMI: Memory slots populated: 1/1 Sep 9 05:35:18.911629 kernel: Hypervisor detected: KVM Sep 9 05:35:18.911637 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 05:35:18.911644 kernel: kvm-clock: using sched offset of 5161323147 cycles Sep 9 05:35:18.911653 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 05:35:18.911661 kernel: tsc: Detected 2499.996 MHz processor Sep 9 05:35:18.911669 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 05:35:18.911679 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 05:35:18.911687 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 9 05:35:18.911695 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 05:35:18.911703 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 05:35:18.911711 kernel: Using GB pages for direct mapping Sep 9 05:35:18.911722 kernel: ACPI: Early table checksum verification disabled Sep 9 05:35:18.911733 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 9 05:35:18.911741 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 9 05:35:18.911750 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 9 05:35:18.911758 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 9 05:35:18.911766 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 9 05:35:18.911774 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 9 05:35:18.911782 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 9 05:35:18.911791 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 9 05:35:18.911801 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 9 05:35:18.911810 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 9 05:35:18.911818 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 9 05:35:18.911826 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 9 05:35:18.911834 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 9 05:35:18.911843 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 9 05:35:18.911851 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 9 05:35:18.911859 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 9 05:35:18.911870 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 9 05:35:18.911878 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 9 05:35:18.911886 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 9 05:35:18.911894 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 9 05:35:18.911903 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 9 05:35:18.911911 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 9 05:35:18.911919 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 9 05:35:18.911927 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 9 
05:35:18.911935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 9 05:35:18.911943 kernel: NUMA: Initialized distance table, cnt=1 Sep 9 05:35:18.911954 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Sep 9 05:35:18.911963 kernel: Zone ranges: Sep 9 05:35:18.911971 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 05:35:18.911979 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 9 05:35:18.911987 kernel: Normal empty Sep 9 05:35:18.911995 kernel: Device empty Sep 9 05:35:18.912004 kernel: Movable zone start for each node Sep 9 05:35:18.912012 kernel: Early memory node ranges Sep 9 05:35:18.912020 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 9 05:35:18.912031 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 9 05:35:18.912039 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 9 05:35:18.912047 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 9 05:35:18.912055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 05:35:18.912064 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 05:35:18.912155 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 9 05:35:18.912163 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 9 05:35:18.912209 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 9 05:35:18.912218 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 05:35:18.912230 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 9 05:35:18.912238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 05:35:18.912246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 05:35:18.912255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 05:35:18.912263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 05:35:18.912282 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 05:35:18.912294 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 05:35:18.912306 kernel: TSC deadline timer available Sep 9 05:35:18.912318 kernel: CPU topo: Max. logical packages: 1 Sep 9 05:35:18.912330 kernel: CPU topo: Max. logical dies: 1 Sep 9 05:35:18.912345 kernel: CPU topo: Max. dies per package: 1 Sep 9 05:35:18.912353 kernel: CPU topo: Max. threads per core: 2 Sep 9 05:35:18.912361 kernel: CPU topo: Num. cores per package: 1 Sep 9 05:35:18.912369 kernel: CPU topo: Num. 
threads per package: 2 Sep 9 05:35:18.912378 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 9 05:35:18.912387 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 05:35:18.912395 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 9 05:35:18.912404 kernel: Booting paravirtualized kernel on KVM Sep 9 05:35:18.912412 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 05:35:18.912423 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 9 05:35:18.912432 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 9 05:35:18.912440 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 9 05:35:18.912448 kernel: pcpu-alloc: [0] 0 1 Sep 9 05:35:18.912457 kernel: kvm-guest: PV spinlocks enabled Sep 9 05:35:18.912465 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 05:35:18.912475 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:35:18.912484 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 05:35:18.912495 kernel: random: crng init done Sep 9 05:35:18.912503 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 05:35:18.912511 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 05:35:18.912519 kernel: Fallback order for Node 0: 0 Sep 9 05:35:18.912528 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Sep 9 05:35:18.912536 kernel: Policy zone: DMA32 Sep 9 05:35:18.912553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 05:35:18.912564 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 9 05:35:18.912573 kernel: Kernel/User page tables isolation: enabled Sep 9 05:35:18.912581 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 05:35:18.912590 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 05:35:18.912601 kernel: Dynamic Preempt: voluntary Sep 9 05:35:18.912610 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 05:35:18.912620 kernel: rcu: RCU event tracing is enabled. Sep 9 05:35:18.912629 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 9 05:35:18.912638 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 05:35:18.912647 kernel: Rude variant of Tasks RCU enabled. Sep 9 05:35:18.912659 kernel: Tracing variant of Tasks RCU enabled. Sep 9 05:35:18.912667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 05:35:18.912676 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 9 05:35:18.912685 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 9 05:35:18.912694 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 9 05:35:18.912703 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 9 05:35:18.912712 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 9 05:35:18.912721 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 05:35:18.912732 kernel: Console: colour dummy device 80x25 Sep 9 05:35:18.912741 kernel: printk: legacy console [tty0] enabled Sep 9 05:35:18.912750 kernel: printk: legacy console [ttyS0] enabled Sep 9 05:35:18.912758 kernel: ACPI: Core revision 20240827 Sep 9 05:35:18.912767 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 9 05:35:18.912776 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 05:35:18.912785 kernel: x2apic enabled Sep 9 05:35:18.912794 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 05:35:18.912802 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 9 05:35:18.912814 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Sep 9 05:35:18.912823 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 9 05:35:18.912832 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 9 05:35:18.912840 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 05:35:18.912849 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 05:35:18.912857 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 05:35:18.912866 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 9 05:35:18.912875 kernel: RETBleed: Vulnerable Sep 9 05:35:18.912884 kernel: Speculative Store Bypass: Vulnerable Sep 9 05:35:18.912892 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 9 05:35:18.912901 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 9 05:35:18.912913 kernel: GDS: Unknown: Dependent on hypervisor status Sep 9 05:35:18.912922 kernel: active return thunk: its_return_thunk Sep 9 05:35:18.912930 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 05:35:18.912939 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 05:35:18.912947 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 05:35:18.912956 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 05:35:18.912964 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 9 05:35:18.912973 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 9 05:35:18.912982 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 9 05:35:18.912990 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 9 05:35:18.912999 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 9 05:35:18.913011 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 9 05:35:18.913019 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 05:35:18.913028 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 9 05:35:18.913036 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 9 05:35:18.913045 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 9 05:35:18.913054 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 9 05:35:18.913062 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 9 05:35:18.913071 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 9 05:35:18.913080 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 9 05:35:18.913088 kernel: Freeing SMP alternatives memory: 32K Sep 9 05:35:18.913097 kernel: pid_max: default: 32768 minimum: 301 Sep 9 05:35:18.913108 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 05:35:18.913117 kernel: landlock: Up and running. Sep 9 05:35:18.913126 kernel: SELinux: Initializing. Sep 9 05:35:18.913135 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 05:35:18.913144 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 05:35:18.913153 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 9 05:35:18.913162 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 9 05:35:18.913183 kernel: signal: max sigframe size: 3632 Sep 9 05:35:18.913192 kernel: rcu: Hierarchical SRCU implementation. Sep 9 05:35:18.913201 kernel: rcu: Max phase no-delay instances is 400. Sep 9 05:35:18.913213 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 05:35:18.913222 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 05:35:18.913231 kernel: smp: Bringing up secondary CPUs ... Sep 9 05:35:18.913240 kernel: smpboot: x86: Booting SMP configuration: Sep 9 05:35:18.913249 kernel: .... node #0, CPUs: #1 Sep 9 05:35:18.913258 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 9 05:35:18.913268 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 9 05:35:18.913276 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 05:35:18.913285 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Sep 9 05:35:18.913297 kernel: Memory: 1908056K/2037804K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 125192K reserved, 0K cma-reserved) Sep 9 05:35:18.913306 kernel: devtmpfs: initialized Sep 9 05:35:18.913315 kernel: x86/mm: Memory block size: 128MB Sep 9 05:35:18.913324 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 9 05:35:18.913334 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 05:35:18.913343 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 9 05:35:18.913351 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 05:35:18.913360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 05:35:18.913372 kernel: audit: initializing netlink subsys (disabled) Sep 9 05:35:18.913381 kernel: audit: type=2000 audit(1757396117.301:1): state=initialized audit_enabled=0 res=1 Sep 9 05:35:18.913390 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 05:35:18.913398 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 05:35:18.913408 kernel: cpuidle: using governor menu Sep 9 05:35:18.913417 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 05:35:18.913425 kernel: dca service started, version 1.12.1 Sep 9 05:35:18.913434 kernel: PCI: Using configuration type 1 for base access Sep 9 05:35:18.913443 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Sep 9 05:35:18.913454 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 05:35:18.913463 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 05:35:18.913472 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 05:35:18.913481 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 05:35:18.913490 kernel: ACPI: Added _OSI(Module Device) Sep 9 05:35:18.913498 kernel: ACPI: Added _OSI(Processor Device) Sep 9 05:35:18.913507 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 05:35:18.913516 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 9 05:35:18.913525 kernel: ACPI: Interpreter enabled Sep 9 05:35:18.913536 kernel: ACPI: PM: (supports S0 S5) Sep 9 05:35:18.913545 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 05:35:18.913554 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 05:35:18.913563 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 05:35:18.913572 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 9 05:35:18.913581 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 05:35:18.913760 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 9 05:35:18.913857 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 9 05:35:18.913951 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 9 05:35:18.913962 kernel: acpiphp: Slot [3] registered Sep 9 05:35:18.913971 kernel: acpiphp: Slot [4] registered Sep 9 05:35:18.913980 kernel: acpiphp: Slot [5] registered Sep 9 05:35:18.913989 kernel: acpiphp: Slot [6] registered Sep 9 05:35:18.913998 kernel: acpiphp: Slot [7] registered Sep 9 05:35:18.914006 kernel: acpiphp: Slot [8] registered Sep 9 05:35:18.914015 kernel: acpiphp: Slot [9] registered Sep 9 05:35:18.914024 kernel: acpiphp: Slot [10] registered Sep 9 05:35:18.914035 kernel: acpiphp: Slot [11] registered Sep 9 05:35:18.914044 kernel: acpiphp: Slot [12] registered Sep 9 05:35:18.914053 kernel: acpiphp: Slot [13] registered Sep 9 05:35:18.914062 kernel: acpiphp: Slot [14] registered Sep 9 05:35:18.914071 kernel: acpiphp: Slot [15] registered Sep 9 05:35:18.914080 kernel: acpiphp: Slot [16] registered Sep 9 05:35:18.914089 kernel: acpiphp: Slot [17] registered Sep 9 05:35:18.914097 kernel: acpiphp: Slot [18] registered Sep 9 05:35:18.914106 kernel: acpiphp: Slot [19] registered Sep 9 05:35:18.914118 kernel: acpiphp: Slot [20] registered Sep 9 05:35:18.914127 kernel: acpiphp: Slot [21] registered Sep 9 05:35:18.914135 kernel: acpiphp: Slot [22] registered Sep 9 05:35:18.914144 kernel: acpiphp: Slot [23] registered Sep 9 05:35:18.914152 kernel: acpiphp: Slot [24] registered Sep 9 05:35:18.914161 kernel: acpiphp: Slot [25] registered Sep 9 05:35:18.914183 kernel: acpiphp: Slot [26] registered Sep 9 05:35:18.914193 kernel: acpiphp: Slot [27] registered Sep 9 05:35:18.914201 kernel: acpiphp: Slot [28] registered Sep 9 05:35:18.914210 kernel: acpiphp: Slot [29] registered Sep 9 05:35:18.914223 kernel: acpiphp: Slot [30] registered Sep 9 05:35:18.914231 kernel: acpiphp: Slot [31] registered Sep 9 05:35:18.914240 kernel: PCI host bridge to bus 0000:00 Sep 9 05:35:18.914340 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 05:35:18.914430 kernel: pci_bus 0000:00: root bus resource [io 
0x0d00-0xffff window] Sep 9 05:35:18.914512 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 05:35:18.914593 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 9 05:35:18.914673 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 9 05:35:18.914761 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 05:35:18.914865 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 9 05:35:18.914965 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Sep 9 05:35:18.915064 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Sep 9 05:35:18.915154 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 9 05:35:18.915275 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 9 05:35:18.915366 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 9 05:35:18.915459 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 9 05:35:18.915553 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 9 05:35:18.915646 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 9 05:35:18.915739 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 9 05:35:18.915837 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Sep 9 05:35:18.915936 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Sep 9 05:35:18.916025 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 9 05:35:18.916115 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 05:35:18.916243 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Sep 9 05:35:18.916352 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Sep 9 05:35:18.916457 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Sep 9 05:35:18.916553 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Sep 9 05:35:18.916570 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 05:35:18.916580 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 05:35:18.916589 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 05:35:18.916598 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 05:35:18.916607 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 9 05:35:18.916616 kernel: iommu: Default domain type: Translated Sep 9 05:35:18.916625 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 05:35:18.916634 kernel: efivars: Registered efivars operations Sep 9 05:35:18.916643 kernel: PCI: Using ACPI for IRQ routing Sep 9 05:35:18.916655 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 05:35:18.916664 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Sep 9 05:35:18.916673 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 9 05:35:18.916682 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 9 05:35:18.916779 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 9 05:35:18.916874 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 9 05:35:18.916965 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 05:35:18.916977 kernel: vgaarb: loaded Sep 9 05:35:18.916989 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 9 05:35:18.916998 kernel: hpet0: 8 comparators, 32-bit 
62.500000 MHz counter Sep 9 05:35:18.917007 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 05:35:18.917016 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 05:35:18.917025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 05:35:18.917034 kernel: pnp: PnP ACPI init Sep 9 05:35:18.917043 kernel: pnp: PnP ACPI: found 5 devices Sep 9 05:35:18.917052 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 05:35:18.917061 kernel: NET: Registered PF_INET protocol family Sep 9 05:35:18.917073 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 05:35:18.917082 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 05:35:18.917091 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 05:35:18.917100 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 05:35:18.917109 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 05:35:18.917118 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 9 05:35:18.917127 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 05:35:18.917136 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 05:35:18.917145 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 05:35:18.917157 kernel: NET: Registered PF_XDP protocol family Sep 9 05:35:18.920368 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 05:35:18.920479 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 05:35:18.920562 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 05:35:18.920644 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 9 05:35:18.920725 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 9 05:35:18.920826 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 9 05:35:18.920839 kernel: PCI: CLS 0 bytes, default 64 Sep 9 05:35:18.920855 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 05:35:18.920865 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 9 05:35:18.920874 kernel: clocksource: Switched to clocksource tsc Sep 9 05:35:18.920883 kernel: Initialise system trusted keyrings Sep 9 05:35:18.920893 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 05:35:18.920902 kernel: Key type asymmetric registered Sep 9 05:35:18.920911 kernel: Asymmetric key parser 'x509' registered Sep 9 05:35:18.920920 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 05:35:18.920929 kernel: io scheduler mq-deadline registered Sep 9 05:35:18.920941 kernel: io scheduler kyber registered Sep 9 05:35:18.920950 kernel: io scheduler bfq registered Sep 9 05:35:18.920959 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 05:35:18.920968 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 05:35:18.920977 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 05:35:18.920986 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 05:35:18.920995 kernel: i8042: Warning: Keylock active Sep 9 05:35:18.921004 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 05:35:18.921013 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 05:35:18.921118 
kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 9 05:35:18.921249 kernel: rtc_cmos 00:00: registered as rtc0 Sep 9 05:35:18.921350 kernel: rtc_cmos 00:00: setting system clock to 2025-09-09T05:35:18 UTC (1757396118) Sep 9 05:35:18.921460 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 9 05:35:18.921497 kernel: intel_pstate: CPU model not supported Sep 9 05:35:18.921509 kernel: efifb: probing for efifb Sep 9 05:35:18.921519 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 9 05:35:18.921531 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 9 05:35:18.921541 kernel: efifb: scrolling: redraw Sep 9 05:35:18.921551 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 05:35:18.921560 kernel: Console: switching to colour frame buffer device 100x37 Sep 9 05:35:18.921569 kernel: fb0: EFI VGA frame buffer device Sep 9 05:35:18.921579 kernel: pstore: Using crash dump compression: deflate Sep 9 05:35:18.921589 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 05:35:18.921598 kernel: NET: Registered PF_INET6 protocol family Sep 9 05:35:18.921608 kernel: Segment Routing with IPv6 Sep 9 05:35:18.921617 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 05:35:18.921629 kernel: NET: Registered PF_PACKET protocol family Sep 9 05:35:18.921639 kernel: Key type dns_resolver registered Sep 9 05:35:18.921648 kernel: IPI shorthand broadcast: enabled Sep 9 05:35:18.921658 kernel: sched_clock: Marking stable (2626002896, 152524454)->(2862000241, -83472891) Sep 9 05:35:18.921667 kernel: registered taskstats version 1 Sep 9 05:35:18.921676 kernel: Loading compiled-in X.509 certificates Sep 9 05:35:18.921686 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3' Sep 9 05:35:18.921696 kernel: Demotion targets for Node 0: null Sep 9 05:35:18.921705 kernel: Key type .fscrypt registered Sep 9 05:35:18.921716 kernel: Key type fscrypt-provisioning registered Sep 9 05:35:18.921726 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 05:35:18.921735 kernel: ima: Allocated hash algorithm: sha1 Sep 9 05:35:18.921744 kernel: ima: No architecture policies found Sep 9 05:35:18.921754 kernel: clk: Disabling unused clocks Sep 9 05:35:18.921763 kernel: Warning: unable to open an initial console. Sep 9 05:35:18.921773 kernel: Freeing unused kernel image (initmem) memory: 54076K Sep 9 05:35:18.921782 kernel: Write protecting the kernel read-only data: 24576k Sep 9 05:35:18.921794 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 05:35:18.921805 kernel: Run /init as init process Sep 9 05:35:18.921815 kernel: with arguments: Sep 9 05:35:18.921827 kernel: /init Sep 9 05:35:18.921836 kernel: with environment: Sep 9 05:35:18.921845 kernel: HOME=/ Sep 9 05:35:18.921857 kernel: TERM=linux Sep 9 05:35:18.921867 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 05:35:18.921878 systemd[1]: Successfully made /usr/ read-only. Sep 9 05:35:18.921892 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:35:18.921902 systemd[1]: Detected virtualization amazon. Sep 9 05:35:18.921912 systemd[1]: Detected architecture x86-64. Sep 9 05:35:18.921921 systemd[1]: Running in initrd. 
Sep 9 05:35:18.921933 systemd[1]: No hostname configured, using default hostname. Sep 9 05:35:18.921943 systemd[1]: Hostname set to . Sep 9 05:35:18.921953 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:35:18.921963 systemd[1]: Queued start job for default target initrd.target. Sep 9 05:35:18.921973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:35:18.921983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:35:18.921994 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 05:35:18.922004 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:35:18.922016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 05:35:18.922027 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 05:35:18.922037 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 05:35:18.922047 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 05:35:18.922057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:35:18.922067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:35:18.922077 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:35:18.922089 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:35:18.922099 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:35:18.922108 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:35:18.922118 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:35:18.922128 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:35:18.922138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 05:35:18.922147 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:35:18.922157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:35:18.922167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:35:18.922433 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:35:18.922444 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:35:18.922454 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:35:18.922464 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:35:18.922475 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 05:35:18.922485 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:35:18.922495 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:35:18.922505 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:35:18.922517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:35:18.922527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:18.922564 systemd-journald[207]: Collecting audit messages is disabled. 
Sep 9 05:35:18.922587 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:35:18.922601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:35:18.922611 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:35:18.922622 systemd-journald[207]: Journal started Sep 9 05:35:18.922646 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2df39d2a6221dda9bf965cb058ba0a) is 4.8M, max 38.4M, 33.6M free. Sep 9 05:35:18.923702 systemd-modules-load[208]: Inserted module 'overlay' Sep 9 05:35:18.928222 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:35:18.935516 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:35:18.940277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:35:18.942458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:18.954885 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:35:18.960684 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:35:18.960760 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:35:18.963663 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:35:18.967275 kernel: Bridge firewalling registered Sep 9 05:35:18.964290 systemd-modules-load[208]: Inserted module 'br_netfilter' Sep 9 05:35:18.965106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:35:18.970416 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:35:18.973378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:35:18.976408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:35:18.997565 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:35:19.002783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 05:35:19.005925 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 05:35:19.007107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:35:19.011372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:35:19.016403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:35:19.031261 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:35:19.078166 systemd-resolved[247]: Positive Trust Anchors: Sep 9 05:35:19.079208 systemd-resolved[247]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:35:19.079276 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:35:19.086453 systemd-resolved[247]: Defaulting to hostname 'linux'. Sep 9 05:35:19.089753 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:35:19.090505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:35:19.132209 kernel: SCSI subsystem initialized Sep 9 05:35:19.142203 kernel: Loading iSCSI transport class v2.0-870. Sep 9 05:35:19.153212 kernel: iscsi: registered transport (tcp) Sep 9 05:35:19.175580 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:35:19.175663 kernel: QLogic iSCSI HBA Driver Sep 9 05:35:19.194658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:35:19.210625 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:35:19.213558 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:35:19.258302 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:35:19.260444 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:35:19.316208 kernel: raid6: avx512x4 gen() 18036 MB/s Sep 9 05:35:19.334205 kernel: raid6: avx512x2 gen() 17771 MB/s Sep 9 05:35:19.352205 kernel: raid6: avx512x1 gen() 17931 MB/s Sep 9 05:35:19.370201 kernel: raid6: avx2x4 gen() 17780 MB/s Sep 9 05:35:19.388203 kernel: raid6: avx2x2 gen() 17828 MB/s Sep 9 05:35:19.406508 kernel: raid6: avx2x1 gen() 13453 MB/s Sep 9 05:35:19.406566 kernel: raid6: using algorithm avx512x4 gen() 18036 MB/s Sep 9 05:35:19.425451 kernel: raid6: .... xor() 7790 MB/s, rmw enabled Sep 9 05:35:19.425499 kernel: raid6: using avx512x2 recovery algorithm Sep 9 05:35:19.447218 kernel: xor: automatically using best checksumming function avx Sep 9 05:35:19.615209 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:35:19.621667 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:35:19.623891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:35:19.655075 systemd-udevd[456]: Using default interface naming scheme 'v255'. Sep 9 05:35:19.661791 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:35:19.665375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:35:19.690536 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 9 05:35:19.717920 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:35:19.720065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:35:19.779638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:35:19.783544 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 05:35:19.841051 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 9 05:35:19.841289 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 9 05:35:19.850246 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 9 05:35:19.855851 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 9 05:35:19.856064 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 9 05:35:19.860454 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 05:35:19.868937 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c2:db:60:28:8b Sep 9 05:35:19.872488 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 9 05:35:19.874727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:19.874998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:19.875694 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:19.877377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:19.885130 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:35:19.885204 kernel: GPT:9289727 != 16777215 Sep 9 05:35:19.885218 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:35:19.885229 kernel: GPT:9289727 != 16777215 Sep 9 05:35:19.885240 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:35:19.885251 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 9 05:35:19.891566 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:35:19.905664 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 05:35:19.910233 kernel: AES CTR mode by8 optimization enabled Sep 9 05:35:19.926877 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:19.946198 kernel: nvme nvme0: using unchecked data buffer Sep 9 05:35:20.083098 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 9 05:35:20.094461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 9 05:35:20.094969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 9 05:35:20.096651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:35:20.116485 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 9 05:35:20.127295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 9 05:35:20.127982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:35:20.129381 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:35:20.130529 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:35:20.132337 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:35:20.135329 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:35:20.155081 disk-uuid[690]: Primary Header is updated. Sep 9 05:35:20.155081 disk-uuid[690]: Secondary Entries is updated. Sep 9 05:35:20.155081 disk-uuid[690]: Secondary Header is updated. Sep 9 05:35:20.162717 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 9 05:35:20.163949 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 9 05:35:21.176989 disk-uuid[693]: The operation has completed successfully. Sep 9 05:35:21.177905 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 9 05:35:21.319390 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:35:21.319518 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:35:21.347826 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:35:21.362272 sh[958]: Success Sep 9 05:35:21.382329 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:35:21.382397 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:35:21.382412 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:35:21.394197 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 05:35:21.481253 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:35:21.485280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:35:21.495787 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:35:21.518246 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (981) Sep 9 05:35:21.522835 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 05:35:21.522918 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:21.626811 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 05:35:21.626879 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:35:21.626894 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:35:21.636966 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:35:21.637897 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:35:21.638441 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:35:21.639187 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:35:21.640859 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 05:35:21.672222 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Sep 9 05:35:21.677033 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:21.677091 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:21.685743 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 9 05:35:21.685809 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 9 05:35:21.692425 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:21.693018 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:35:21.696466 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:35:21.745566 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:35:21.748444 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 05:35:21.795140 systemd-networkd[1150]: lo: Link UP Sep 9 05:35:21.795154 systemd-networkd[1150]: lo: Gained carrier Sep 9 05:35:21.796925 systemd-networkd[1150]: Enumeration completed Sep 9 05:35:21.797051 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:35:21.797568 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:35:21.797574 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:35:21.800012 systemd[1]: Reached target network.target - Network. Sep 9 05:35:21.801528 systemd-networkd[1150]: eth0: Link UP Sep 9 05:35:21.801534 systemd-networkd[1150]: eth0: Gained carrier Sep 9 05:35:21.801551 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:35:21.813299 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.25.117/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 9 05:35:22.199287 ignition[1081]: Ignition 2.22.0 Sep 9 05:35:22.199302 ignition[1081]: Stage: fetch-offline Sep 9 05:35:22.199480 ignition[1081]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:22.199488 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:22.199790 ignition[1081]: Ignition finished successfully Sep 9 05:35:22.201293 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:35:22.203236 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 9 05:35:22.238049 ignition[1159]: Ignition 2.22.0 Sep 9 05:35:22.238066 ignition[1159]: Stage: fetch Sep 9 05:35:22.238506 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:22.238519 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:22.238641 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:22.273691 ignition[1159]: PUT result: OK Sep 9 05:35:22.276717 ignition[1159]: parsed url from cmdline: "" Sep 9 05:35:22.276729 ignition[1159]: no config URL provided Sep 9 05:35:22.276740 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:35:22.276755 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:35:22.276793 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:22.277583 ignition[1159]: PUT result: OK Sep 9 05:35:22.277651 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 9 05:35:22.278478 ignition[1159]: GET result: OK Sep 9 05:35:22.278582 ignition[1159]: parsing config with SHA512: 7669f24fd55ea5b9b79a037b4923d7f5ef2e2a8035a6ca46e3ee7b6b4d521c158873a0e3a681fccd68f7d33a2a054b77440b6ccb000bc22c2df8ee5a213165e7 Sep 9 05:35:22.286606 unknown[1159]: fetched base config from "system" Sep 9 05:35:22.287921 unknown[1159]: fetched base config from "system" Sep 9 05:35:22.287934 unknown[1159]: fetched user config from "aws" Sep 9 05:35:22.289228 ignition[1159]: fetch: fetch complete Sep 9 05:35:22.289771 ignition[1159]: fetch: fetch passed Sep 9 05:35:22.289846 ignition[1159]: Ignition finished successfully Sep 9 05:35:22.292413 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 05:35:22.294106 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 9 05:35:22.331214 ignition[1166]: Ignition 2.22.0 Sep 9 05:35:22.331229 ignition[1166]: Stage: kargs Sep 9 05:35:22.332106 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:22.332120 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:22.332397 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:22.333274 ignition[1166]: PUT result: OK Sep 9 05:35:22.335387 ignition[1166]: kargs: kargs passed Sep 9 05:35:22.335457 ignition[1166]: Ignition finished successfully Sep 9 05:35:22.337583 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:35:22.339433 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 05:35:22.372674 ignition[1173]: Ignition 2.22.0 Sep 9 05:35:22.372691 ignition[1173]: Stage: disks Sep 9 05:35:22.373087 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:22.373100 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:22.373230 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:22.374205 ignition[1173]: PUT result: OK Sep 9 05:35:22.376599 ignition[1173]: disks: disks passed Sep 9 05:35:22.376677 ignition[1173]: Ignition finished successfully Sep 9 05:35:22.378513 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:35:22.379384 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:35:22.379743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:35:22.380658 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:35:22.380979 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:35:22.381512 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:35:22.383109 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:35:22.432809 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:35:22.435418 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:35:22.437343 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:35:22.588220 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:35:22.588675 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:35:22.589523 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:35:22.591600 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:35:22.594272 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:35:22.596829 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 05:35:22.599301 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:35:22.600363 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:35:22.606711 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:35:22.608868 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 9 05:35:22.622195 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200) Sep 9 05:35:22.625255 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:22.625312 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:22.634923 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 9 05:35:22.634987 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 9 05:35:22.637095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:35:23.009762 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:35:23.038921 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:35:23.057399 systemd-networkd[1150]: eth0: Gained IPv6LL Sep 9 05:35:23.060141 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:35:23.064891 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:35:23.310131 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 05:35:23.312392 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:35:23.314027 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:35:23.326504 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:35:23.330857 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:23.352036 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 05:35:23.363073 ignition[1312]: INFO : Ignition 2.22.0 Sep 9 05:35:23.363073 ignition[1312]: INFO : Stage: mount Sep 9 05:35:23.364368 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:23.364368 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:23.364368 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:23.364368 ignition[1312]: INFO : PUT result: OK Sep 9 05:35:23.366684 ignition[1312]: INFO : mount: mount passed Sep 9 05:35:23.367254 ignition[1312]: INFO : Ignition finished successfully Sep 9 05:35:23.368814 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:35:23.370479 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:35:23.590906 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:35:23.620217 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325) Sep 9 05:35:23.624965 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:23.625232 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:23.633049 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 9 05:35:23.633134 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 9 05:35:23.635307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
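Editor's note: the lines above mount the btrfs partition labeled OEM (here /dev/nvme0n1p6) under /sysroot/oem. A small sketch of the same idea follows, assuming the usual /dev/disk/by-label udev symlinks and a caller with root privileges; the target path and the read-only flag are illustrative choices, not what the boot flow does verbatim.

// mount_oem.go - sketch of mounting a filesystem by label, as done for the
// OEM partition above. Assumes Linux, root privileges, and the standard
// /dev/disk/by-label symlinks created by udev; paths are illustrative.
package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	// Resolve the label symlink to the real block device (e.g. /dev/nvme0n1p6).
	dev, err := filepath.EvalSymlinks("/dev/disk/by-label/OEM")
	if err != nil {
		log.Fatalf("resolve label: %v", err)
	}
	target := "/sysroot/oem"
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatalf("mkdir: %v", err)
	}
	// Mount read-only here just to inspect the partition; the boot flow
	// above mounts it normally for provisioning.
	if err := syscall.Mount(dev, target, "btrfs", syscall.MS_RDONLY, ""); err != nil {
		log.Fatalf("mount %s on %s: %v", dev, target, err)
	}
	log.Printf("mounted %s on %s", dev, target)
}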
Sep 9 05:35:23.669170 ignition[1342]: INFO : Ignition 2.22.0 Sep 9 05:35:23.669170 ignition[1342]: INFO : Stage: files Sep 9 05:35:23.670536 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:23.670536 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:23.670536 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:23.671839 ignition[1342]: INFO : PUT result: OK Sep 9 05:35:23.673161 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:35:23.674509 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:35:23.674509 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:35:23.687795 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:35:23.688713 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:35:23.688713 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:35:23.688542 unknown[1342]: wrote ssh authorized keys file for user: core Sep 9 05:35:23.690938 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:35:23.691746 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 05:35:23.733396 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:35:23.989892 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:35:23.989892 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:35:23.991583 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 05:35:24.255993 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 05:35:24.770924 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:35:24.772322 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:35:24.772322 
ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:35:24.778186 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:35:24.778186 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:35:24.778186 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:24.780736 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:24.780736 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:24.780736 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 05:35:25.155350 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 05:35:25.472901 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:25.472901 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 05:35:25.498140 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:35:25.502831 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:35:25.502831 ignition[1342]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 05:35:25.505217 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:35:25.505217 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:35:25.505217 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:35:25.505217 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:35:25.505217 ignition[1342]: INFO : files: files passed Sep 9 05:35:25.505217 ignition[1342]: INFO : Ignition finished successfully Sep 9 05:35:25.505409 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:35:25.507825 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:35:25.511081 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:35:25.522533 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:35:25.524069 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
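Editor's note: the files stage above writes files and a unit into the new root, creates the /etc/extensions symlink for the kubernetes sysext, and marks prepare-helm.service enabled via a preset. The Go sketch below replays three of those operations against /sysroot for illustration only; the unit body and the preset file name are placeholders, not what Ignition actually wrote.

// provision_sketch.go - illustrative replay of three operations from the
// files stage above: write a unit file, enable it via a preset entry, and
// create the sysext activation symlink. Bodies and file names are placeholders.
package main

import (
	"log"
	"os"
	"path/filepath"
)

const root = "/sysroot" // provisioning happens against the mounted new root

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	unit := filepath.Join(root, "etc/systemd/system/prepare-helm.service")
	preset := filepath.Join(root, "etc/systemd/system-preset/20-ignition.preset")
	link := filepath.Join(root, "etc/extensions/kubernetes.raw")

	must(os.MkdirAll(filepath.Dir(unit), 0o755))
	must(os.MkdirAll(filepath.Dir(preset), 0o755))
	must(os.MkdirAll(filepath.Dir(link), 0o755))

	// Placeholder unit body; the real one unpacks helm into /opt/bin.
	must(os.WriteFile(unit, []byte("[Unit]\nDescription=Unpack helm to /opt/bin\n"), 0o644))

	// "setting preset to enabled" amounts to an enable line in a preset file.
	must(os.WriteFile(preset, []byte("enable prepare-helm.service\n"), 0o644))

	// The kubernetes sysext image is activated by a symlink under /etc/extensions.
	must(os.Symlink("/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw", link))

	log.Println("wrote unit, preset, and extension symlink")
}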
Sep 9 05:35:25.528740 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:35:25.529787 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:35:25.531081 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:35:25.532520 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:35:25.533588 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:35:25.535536 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:35:25.584883 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:35:25.585034 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 05:35:25.586285 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:35:25.587441 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:35:25.588360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:35:25.589530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:35:25.629410 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:35:25.631338 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:35:25.654229 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:35:25.654992 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:35:25.656068 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 05:35:25.657040 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:35:25.657309 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:35:25.658404 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:35:25.659299 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:35:25.660005 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:35:25.660965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:35:25.661726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:35:25.662504 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:35:25.663285 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:35:25.664106 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:35:25.664969 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:35:25.666057 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:35:25.666839 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:35:25.667576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:35:25.667804 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:35:25.668942 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:35:25.669724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:35:25.670451 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 9 05:35:25.670612 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:35:25.671266 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:35:25.671442 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 05:35:25.672620 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:35:25.672880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:35:25.673533 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:35:25.673724 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:35:25.676336 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:35:25.679458 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:35:25.679942 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:35:25.680196 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:35:25.683519 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:35:25.683744 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:35:25.690664 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:35:25.693296 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 05:35:25.720201 ignition[1396]: INFO : Ignition 2.22.0 Sep 9 05:35:25.720201 ignition[1396]: INFO : Stage: umount Sep 9 05:35:25.720201 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:25.720201 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 9 05:35:25.720201 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 9 05:35:25.720201 ignition[1396]: INFO : PUT result: OK Sep 9 05:35:25.719792 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 05:35:25.725545 ignition[1396]: INFO : umount: umount passed Sep 9 05:35:25.725545 ignition[1396]: INFO : Ignition finished successfully Sep 9 05:35:25.728150 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:35:25.728476 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:35:25.729354 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:35:25.729423 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:35:25.729913 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:35:25.729978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:35:25.730592 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 9 05:35:25.730651 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 9 05:35:25.731313 systemd[1]: Stopped target network.target - Network. Sep 9 05:35:25.731884 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:35:25.731948 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:35:25.732655 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:35:25.733238 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:35:25.737277 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:35:25.737770 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 05:35:25.738778 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 9 05:35:25.739500 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 05:35:25.739561 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:35:25.740987 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 05:35:25.741041 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:35:25.741622 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 05:35:25.741705 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 05:35:25.742298 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 05:35:25.742355 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 05:35:25.743122 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 05:35:25.743780 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 05:35:25.750289 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 05:35:25.750460 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 05:35:25.754921 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 05:35:25.755369 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 05:35:25.755508 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 05:35:25.758086 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 05:35:25.759431 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 05:35:25.759933 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 05:35:25.759991 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:35:25.761857 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 05:35:25.762394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 05:35:25.762498 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:35:25.763481 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:35:25.763547 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:35:25.765966 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 05:35:25.766040 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 05:35:25.767311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 05:35:25.767392 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:35:25.770363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:35:25.775954 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:35:25.776060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:35:25.788934 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 05:35:25.789159 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:35:25.792087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 05:35:25.792413 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 05:35:25.793694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 05:35:25.794099 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 05:35:25.794855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 05:35:25.795301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:35:25.796211 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 05:35:25.796613 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 05:35:25.797497 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 05:35:25.797899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:35:25.799773 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 05:35:25.801764 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 05:35:25.802427 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:35:25.804459 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 05:35:25.804542 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:35:25.805885 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 05:35:25.805955 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:35:25.807801 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:35:25.807870 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:35:25.809312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:25.809384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:25.813445 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 05:35:25.813536 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 05:35:25.813593 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 05:35:25.813652 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:35:25.814287 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 05:35:25.814432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 05:35:25.823883 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 05:35:25.824033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 05:35:25.839673 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 05:35:25.839815 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 05:35:25.841321 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 05:35:25.841894 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 05:35:25.841977 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 05:35:25.843726 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 05:35:25.876470 systemd[1]: Switching root. Sep 9 05:35:25.912223 systemd-journald[207]: Journal stopped Sep 9 05:35:27.614163 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). 
Sep 9 05:35:27.625926 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 05:35:27.625959 kernel: SELinux: policy capability open_perms=1 Sep 9 05:35:27.625980 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 05:35:27.626000 kernel: SELinux: policy capability always_check_network=0 Sep 9 05:35:27.626019 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 05:35:27.626039 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 05:35:27.626059 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 05:35:27.626092 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 05:35:27.626116 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 05:35:27.626136 kernel: audit: type=1403 audit(1757396126.331:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 05:35:27.626163 systemd[1]: Successfully loaded SELinux policy in 77.633ms. Sep 9 05:35:27.629241 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.163ms. Sep 9 05:35:27.629284 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:35:27.629305 systemd[1]: Detected virtualization amazon. Sep 9 05:35:27.629325 systemd[1]: Detected architecture x86-64. Sep 9 05:35:27.629343 systemd[1]: Detected first boot. Sep 9 05:35:27.629367 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:35:27.629386 zram_generator::config[1440]: No configuration found. Sep 9 05:35:27.629411 kernel: Guest personality initialized and is inactive Sep 9 05:35:27.629433 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 05:35:27.629450 kernel: Initialized host personality Sep 9 05:35:27.629469 kernel: NET: Registered PF_VSOCK protocol family Sep 9 05:35:27.629490 systemd[1]: Populated /etc with preset unit settings. Sep 9 05:35:27.629515 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 05:35:27.629544 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 05:35:27.629565 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 05:35:27.629588 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 05:35:27.629610 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 05:35:27.629632 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 05:35:27.629656 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 05:35:27.629678 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 05:35:27.629700 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 05:35:27.629723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 05:35:27.629749 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 05:35:27.629771 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 05:35:27.629791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:35:27.629810 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 9 05:35:27.629830 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 05:35:27.629856 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 05:35:27.629876 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 05:35:27.629898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:35:27.629917 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 05:35:27.629936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:35:27.629955 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:35:27.629975 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 05:35:27.629995 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 05:35:27.630014 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 05:35:27.630034 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 05:35:27.630054 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:35:27.630076 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:35:27.630096 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:35:27.630116 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:35:27.630136 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 05:35:27.630156 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 05:35:27.633208 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 05:35:27.633260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:35:27.633280 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:35:27.633300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:35:27.633319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 05:35:27.633343 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 05:35:27.633363 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 05:35:27.633381 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 05:35:27.633400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:27.633419 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 05:35:27.633438 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 05:35:27.633457 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 05:35:27.633478 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 05:35:27.633500 systemd[1]: Reached target machines.target - Containers. Sep 9 05:35:27.633521 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 05:35:27.633540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:35:27.633559 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 9 05:35:27.633577 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 05:35:27.633596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:35:27.633614 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:35:27.633633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:35:27.633656 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 05:35:27.633680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:35:27.633699 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 05:35:27.633719 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 05:35:27.633739 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 05:35:27.633758 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 05:35:27.633777 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 05:35:27.633798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:35:27.633818 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:35:27.633840 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:35:27.633859 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:35:27.633880 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 05:35:27.639458 kernel: fuse: init (API version 7.41) Sep 9 05:35:27.639499 kernel: loop: module loaded Sep 9 05:35:27.639521 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 05:35:27.639541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:35:27.639568 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 05:35:27.639587 systemd[1]: Stopped verity-setup.service. Sep 9 05:35:27.639608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:27.639629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 05:35:27.639649 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 05:35:27.639668 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 05:35:27.639688 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 05:35:27.639707 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 05:35:27.639727 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 05:35:27.639747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:35:27.639766 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 05:35:27.639785 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 05:35:27.639808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:35:27.639827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:35:27.639846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 9 05:35:27.639864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:35:27.639884 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 05:35:27.639903 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 05:35:27.639922 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:35:27.639940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:35:27.639959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:35:27.639981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:35:27.639999 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 05:35:27.640019 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:35:27.640076 systemd-journald[1519]: Collecting audit messages is disabled. Sep 9 05:35:27.640112 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 05:35:27.640132 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 05:35:27.640155 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 05:35:27.640189 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:35:27.640208 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 05:35:27.640239 systemd-journald[1519]: Journal started Sep 9 05:35:27.640278 systemd-journald[1519]: Runtime Journal (/run/log/journal/ec2df39d2a6221dda9bf965cb058ba0a) is 4.8M, max 38.4M, 33.6M free. Sep 9 05:35:27.222566 systemd[1]: Queued start job for default target multi-user.target. Sep 9 05:35:27.231510 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 9 05:35:27.231965 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 05:35:27.647309 kernel: ACPI: bus type drm_connector registered Sep 9 05:35:27.647362 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 05:35:27.653203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:35:27.656203 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 05:35:27.662209 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:35:27.667204 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 05:35:27.671211 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:35:27.676203 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:35:27.686280 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 05:35:27.703230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:35:27.713239 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:35:27.716544 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 05:35:27.718258 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Sep 9 05:35:27.718675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:35:27.720935 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 05:35:27.722525 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 05:35:27.724486 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 05:35:27.726769 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 05:35:27.736786 kernel: loop0: detected capacity change from 0 to 110984 Sep 9 05:35:27.760579 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 05:35:27.765127 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 05:35:27.769435 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 05:35:27.781896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:35:27.810541 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 05:35:27.833628 systemd-journald[1519]: Time spent on flushing to /var/log/journal/ec2df39d2a6221dda9bf965cb058ba0a is 56.536ms for 1028 entries. Sep 9 05:35:27.833628 systemd-journald[1519]: System Journal (/var/log/journal/ec2df39d2a6221dda9bf965cb058ba0a) is 8M, max 195.6M, 187.6M free. Sep 9 05:35:27.902495 systemd-journald[1519]: Received client request to flush runtime journal. Sep 9 05:35:27.902552 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 05:35:27.902593 kernel: loop1: detected capacity change from 0 to 224512 Sep 9 05:35:27.861497 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Sep 9 05:35:27.861520 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Sep 9 05:35:27.868841 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:35:27.875438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:35:27.880455 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 05:35:27.906877 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 05:35:27.953240 kernel: loop2: detected capacity change from 0 to 72368 Sep 9 05:35:27.975475 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 05:35:27.978437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:35:28.003658 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Sep 9 05:35:28.003942 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Sep 9 05:35:28.007941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:35:28.062360 kernel: loop3: detected capacity change from 0 to 128016 Sep 9 05:35:28.164291 kernel: loop4: detected capacity change from 0 to 110984 Sep 9 05:35:28.180330 kernel: loop5: detected capacity change from 0 to 224512 Sep 9 05:35:28.220211 kernel: loop6: detected capacity change from 0 to 72368 Sep 9 05:35:28.233729 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 05:35:28.235206 kernel: loop7: detected capacity change from 0 to 128016 Sep 9 05:35:28.258897 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 9 05:35:28.259644 (sd-merge)[1600]: Merged extensions into '/usr'. 
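Editor's note: the (sd-merge) lines above show systemd-sysext merging the containerd, docker, kubernetes, and oem-ami extensions into /usr. Before merging, sysext checks each image's extension-release metadata against the host os-release. The sketch below is a simplified, hedged version of that ID check only (it ignores SYSEXT_LEVEL/VERSION_ID matching), and the extension-release path for "kubernetes" is an assumed example.

// sysext_check.go - rough sketch of the compatibility check systemd-sysext
// performs before merging an extension into /usr: the extension's
// extension-release file must carry an ID matching the host os-release
// (or ID=_any). Field handling is simplified; paths are illustrative.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// parseReleaseFile reads KEY=VALUE lines, stripping optional quotes.
func parseReleaseFile(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out, sc.Err()
}

func main() {
	host, err := parseReleaseFile("/etc/os-release")
	if err != nil {
		log.Fatalf("host os-release: %v", err)
	}
	// Assumed path for an extension named "kubernetes" after merging.
	ext, err := parseReleaseFile("/usr/lib/extension-release.d/extension-release.kubernetes")
	if err != nil {
		log.Fatalf("extension-release: %v", err)
	}
	if ext["ID"] == "_any" || ext["ID"] == host["ID"] {
		fmt.Printf("extension compatible with host ID %q\n", host["ID"])
	} else {
		fmt.Printf("ID mismatch: host %q vs extension %q\n", host["ID"], ext["ID"])
	}
}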
Sep 9 05:35:28.273879 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 05:35:28.274055 systemd[1]: Reloading... Sep 9 05:35:28.376205 zram_generator::config[1625]: No configuration found. Sep 9 05:35:28.744967 systemd[1]: Reloading finished in 470 ms. Sep 9 05:35:28.765197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 05:35:28.776385 systemd[1]: Starting ensure-sysext.service... Sep 9 05:35:28.780288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:35:28.824110 systemd[1]: Reload requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)... Sep 9 05:35:28.824303 systemd[1]: Reloading... Sep 9 05:35:28.838256 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 05:35:28.838315 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 05:35:28.838747 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 05:35:28.839151 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 05:35:28.844480 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 05:35:28.845693 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Sep 9 05:35:28.845879 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Sep 9 05:35:28.869051 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:35:28.871227 systemd-tmpfiles[1678]: Skipping /boot Sep 9 05:35:28.883854 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 05:35:28.883995 systemd-tmpfiles[1678]: Skipping /boot Sep 9 05:35:28.955216 zram_generator::config[1706]: No configuration found. Sep 9 05:35:29.185683 systemd[1]: Reloading finished in 360 ms. Sep 9 05:35:29.194922 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 05:35:29.209022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:35:29.216932 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:35:29.233370 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 05:35:29.235254 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 05:35:29.242417 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:35:29.246401 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:35:29.256141 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 05:35:29.260507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:29.260705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:35:29.264462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:35:29.268157 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:35:29.272249 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 9 05:35:29.272751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:35:29.272864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:35:29.272960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:29.276989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:29.277418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:35:29.277594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:35:29.277674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:35:29.277768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:29.283148 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:35:29.284360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:35:29.289034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:29.290452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:35:29.295406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:35:29.296435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:35:29.296486 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:35:29.296564 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 05:35:29.297272 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:35:29.298130 systemd[1]: Finished ensure-sysext.service. Sep 9 05:35:29.309646 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 05:35:29.317999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:35:29.319485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:35:29.320787 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:35:29.320947 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:35:29.326350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 05:35:29.334544 systemd-udevd[1770]: Using default interface naming scheme 'v255'. 
Sep 9 05:35:29.335617 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 05:35:29.337692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:35:29.338418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:35:29.339086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 05:35:29.358165 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 05:35:29.360348 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 05:35:29.367239 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 05:35:29.369738 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 05:35:29.375119 augenrules[1800]: No rules Sep 9 05:35:29.374919 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:35:29.376278 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:35:29.402545 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 05:35:29.404767 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 05:35:29.405910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:35:29.412773 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:35:29.471893 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 05:35:29.473759 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 05:35:29.541733 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 05:35:29.553589 (udev-worker)[1826]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:35:29.685246 systemd-networkd[1817]: lo: Link UP Sep 9 05:35:29.685261 systemd-networkd[1817]: lo: Gained carrier Sep 9 05:35:29.707162 systemd-resolved[1768]: Positive Trust Anchors: Sep 9 05:35:29.707202 systemd-resolved[1768]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:35:29.707267 systemd-resolved[1768]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:35:29.720981 systemd-resolved[1768]: Defaulting to hostname 'linux'. Sep 9 05:35:29.724691 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:35:29.725526 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:35:29.729756 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 05:35:29.726338 systemd-networkd[1817]: Enumeration completed Sep 9 05:35:29.726789 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 05:35:29.726795 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:35:29.727324 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:35:29.728101 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 05:35:29.730601 systemd-networkd[1817]: eth0: Link UP Sep 9 05:35:29.730612 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 05:35:29.731284 systemd-networkd[1817]: eth0: Gained carrier Sep 9 05:35:29.731314 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:35:29.731461 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 05:35:29.732470 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 05:35:29.734074 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 05:35:29.734934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 05:35:29.736517 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 05:35:29.736558 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:35:29.737034 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:35:29.739223 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 05:35:29.740374 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.25.117/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 9 05:35:29.742920 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 05:35:29.748022 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 05:35:29.750033 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 05:35:29.750685 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 05:35:29.759220 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 05:35:29.760389 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 05:35:29.761896 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:35:29.762707 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 05:35:29.765165 systemd[1]: Reached target network.target - Network. Sep 9 05:35:29.765735 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:35:29.766281 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:35:29.766828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:35:29.766870 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 05:35:29.768240 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 05:35:29.776299 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 05:35:29.783465 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 05:35:29.789703 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 05:35:29.796113 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
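Editor's note: systemd-networkd's DHCPv4 lease for eth0 (172.31.25.117/20 from gateway 172.31.16.1) is logged just above. A trivial way to confirm from code which addresses are actually configured on the link is the standard-library sketch below; the interface name "eth0" is taken from the log.

// ifaddrs.go - tiny sketch: list the addresses configured on eth0, which
// should include the DHCPv4 lease logged above once networkd has applied it.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		log.Fatalf("eth0: %v", err)
	}
	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatalf("addrs: %v", err)
	}
	fmt.Printf("eth0 flags=%v mtu=%d\n", iface.Flags, iface.MTU)
	for _, a := range addrs {
		fmt.Println(" ", a.String())
	}
}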
Sep 9 05:35:29.803691 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 05:35:29.805361 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 05:35:29.815209 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 05:35:29.813802 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 05:35:29.818443 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 05:35:29.830299 kernel: ACPI: button: Power Button [PWRF] Sep 9 05:35:29.830402 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Sep 9 05:35:29.830428 kernel: ACPI: button: Sleep Button [SLPF] Sep 9 05:35:29.825009 systemd[1]: Started ntpd.service - Network Time Service. Sep 9 05:35:29.837352 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 05:35:29.852359 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 9 05:35:29.857443 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 05:35:29.873774 jq[1859]: false Sep 9 05:35:29.904560 google_oslogin_nss_cache[1861]: oslogin_cache_refresh[1861]: Refreshing passwd entry cache Sep 9 05:35:29.904560 google_oslogin_nss_cache[1861]: oslogin_cache_refresh[1861]: Failure getting users, quitting Sep 9 05:35:29.904560 google_oslogin_nss_cache[1861]: oslogin_cache_refresh[1861]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:35:29.904560 google_oslogin_nss_cache[1861]: oslogin_cache_refresh[1861]: Refreshing group entry cache Sep 9 05:35:29.904560 google_oslogin_nss_cache[1861]: oslogin_cache_refresh[1861]: Failure getting groups, quitting Sep 9 05:35:29.904560 google_oslogin_nss_cache[1861]: oslogin_cache_refresh[1861]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:35:29.883797 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 05:35:29.877686 oslogin_cache_refresh[1861]: Refreshing passwd entry cache Sep 9 05:35:29.905372 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 05:35:29.895360 oslogin_cache_refresh[1861]: Failure getting users, quitting Sep 9 05:35:29.895386 oslogin_cache_refresh[1861]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 05:35:29.895446 oslogin_cache_refresh[1861]: Refreshing group entry cache Sep 9 05:35:29.898570 oslogin_cache_refresh[1861]: Failure getting groups, quitting Sep 9 05:35:29.898586 oslogin_cache_refresh[1861]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 05:35:29.915912 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 05:35:29.923494 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 05:35:29.926682 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 05:35:29.928457 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 05:35:29.933759 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 9 05:35:29.956217 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 9 05:35:29.955371 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 05:35:29.970215 extend-filesystems[1860]: Found /dev/nvme0n1p6 Sep 9 05:35:29.976972 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 05:35:29.979627 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 05:35:29.979915 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 05:35:29.980604 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 05:35:29.980866 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 05:35:29.990512 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 05:35:29.990777 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 05:35:30.024668 jq[1880]: true Sep 9 05:35:30.047113 ntpd[1863]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:09:56 UTC 2025 (1): Starting Sep 9 05:35:30.047151 ntpd[1863]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: ntpd 4.2.8p17@1.4004-o Tue Sep 9 03:09:56 UTC 2025 (1): Starting Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: ---------------------------------------------------- Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: ntp-4 is maintained by Network Time Foundation, Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: corporation. Support and training for ntp-4 are Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: available at https://www.nwtime.org/support Sep 9 05:35:30.047549 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: ---------------------------------------------------- Sep 9 05:35:30.047162 ntpd[1863]: ---------------------------------------------------- Sep 9 05:35:30.047187 ntpd[1863]: ntp-4 is maintained by Network Time Foundation, Sep 9 05:35:30.047197 ntpd[1863]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 9 05:35:30.047206 ntpd[1863]: corporation. Support and training for ntp-4 are Sep 9 05:35:30.047215 ntpd[1863]: available at https://www.nwtime.org/support Sep 9 05:35:30.047223 ntpd[1863]: ---------------------------------------------------- Sep 9 05:35:30.052196 extend-filesystems[1860]: Found /dev/nvme0n1p9 Sep 9 05:35:30.060644 ntpd[1863]: proto: precision = 0.076 usec (-24) Sep 9 05:35:30.062095 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: proto: precision = 0.076 usec (-24) Sep 9 05:35:30.063048 ntpd[1863]: basedate set to 2025-08-28 Sep 9 05:35:30.063078 ntpd[1863]: gps base set to 2025-08-31 (week 2382) Sep 9 05:35:30.063211 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: basedate set to 2025-08-28 Sep 9 05:35:30.063211 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: gps base set to 2025-08-31 (week 2382) Sep 9 05:35:30.070342 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 05:35:30.070803 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 9 05:35:30.071897 extend-filesystems[1860]: Checking size of /dev/nvme0n1p9 Sep 9 05:35:30.079825 (ntainerd)[1901]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 05:35:30.079451 ntpd[1863]: Listen and drop on 0 v6wildcard [::]:123 Sep 9 05:35:30.091744 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Listen and drop on 0 v6wildcard [::]:123 Sep 9 05:35:30.091744 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 9 05:35:30.079505 ntpd[1863]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 9 05:35:30.100129 ntpd[1863]: Listen normally on 2 lo 127.0.0.1:123 Sep 9 05:35:30.100226 ntpd[1863]: Listen normally on 3 eth0 172.31.25.117:123 Sep 9 05:35:30.100360 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Listen normally on 2 lo 127.0.0.1:123 Sep 9 05:35:30.100360 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Listen normally on 3 eth0 172.31.25.117:123 Sep 9 05:35:30.100360 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Listen normally on 4 lo [::1]:123 Sep 9 05:35:30.100360 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: bind(21) AF_INET6 fe80::4c2:dbff:fe60:288b%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 05:35:30.100271 ntpd[1863]: Listen normally on 4 lo [::1]:123 Sep 9 05:35:30.100560 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: unable to create socket on eth0 (5) for fe80::4c2:dbff:fe60:288b%2#123 Sep 9 05:35:30.100560 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: failed to init interface for address fe80::4c2:dbff:fe60:288b%2 Sep 9 05:35:30.100326 ntpd[1863]: bind(21) AF_INET6 fe80::4c2:dbff:fe60:288b%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 05:35:30.100675 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: Listening on routing socket on fd #21 for interface updates Sep 9 05:35:30.100348 ntpd[1863]: unable to create socket on eth0 (5) for fe80::4c2:dbff:fe60:288b%2#123 Sep 9 05:35:30.100523 ntpd[1863]: failed to init interface for address fe80::4c2:dbff:fe60:288b%2 Sep 9 05:35:30.100564 ntpd[1863]: Listening on routing socket on fd #21 for interface updates Sep 9 05:35:30.117746 jq[1899]: true Sep 9 05:35:30.123649 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 05:35:30.126811 ntpd[1863]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 05:35:30.127253 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 05:35:30.127349 ntpd[1863]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 05:35:30.127837 ntpd[1863]: 9 Sep 05:35:30 ntpd[1863]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 9 05:35:30.135724 update_engine[1878]: I20250909 05:35:30.135617 1878 main.cc:92] Flatcar Update Engine starting Sep 9 05:35:30.140648 tar[1887]: linux-amd64/LICENSE Sep 9 05:35:30.140954 tar[1887]: linux-amd64/helm Sep 9 05:35:30.157778 dbus-daemon[1857]: [system] SELinux support is enabled Sep 9 05:35:30.158011 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 05:35:30.166472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 05:35:30.166524 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 9 05:35:30.168342 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 05:35:30.168379 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 05:35:30.186834 extend-filesystems[1860]: Resized partition /dev/nvme0n1p9 Sep 9 05:35:30.193634 extend-filesystems[1944]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 05:35:30.206248 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 9 05:35:30.201032 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 9 05:35:30.200945 dbus-daemon[1857]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 9 05:35:30.206783 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 9 05:35:30.208539 systemd[1]: Started update-engine.service - Update Engine. Sep 9 05:35:30.210319 update_engine[1878]: I20250909 05:35:30.208675 1878 update_check_scheduler.cc:74] Next update check in 9m52s Sep 9 05:35:30.221472 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 05:35:30.262233 coreos-metadata[1856]: Sep 09 05:35:30.259 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 9 05:35:30.268193 coreos-metadata[1856]: Sep 09 05:35:30.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 9 05:35:30.268951 coreos-metadata[1856]: Sep 09 05:35:30.268 INFO Fetch successful Sep 9 05:35:30.268951 coreos-metadata[1856]: Sep 09 05:35:30.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 9 05:35:30.270621 coreos-metadata[1856]: Sep 09 05:35:30.270 INFO Fetch successful Sep 9 05:35:30.270795 systemd-logind[1872]: New seat seat0. Sep 9 05:35:30.277690 coreos-metadata[1856]: Sep 09 05:35:30.270 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 9 05:35:30.281874 coreos-metadata[1856]: Sep 09 05:35:30.281 INFO Fetch successful Sep 9 05:35:30.282594 coreos-metadata[1856]: Sep 09 05:35:30.282 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 9 05:35:30.283869 coreos-metadata[1856]: Sep 09 05:35:30.283 INFO Fetch successful Sep 9 05:35:30.284013 coreos-metadata[1856]: Sep 09 05:35:30.283 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 9 05:35:30.284900 coreos-metadata[1856]: Sep 09 05:35:30.284 INFO Fetch failed with 404: resource not found Sep 9 05:35:30.284900 coreos-metadata[1856]: Sep 09 05:35:30.284 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 9 05:35:30.285726 coreos-metadata[1856]: Sep 09 05:35:30.285 INFO Fetch successful Sep 9 05:35:30.285996 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 9 05:35:30.286170 coreos-metadata[1856]: Sep 09 05:35:30.285 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 9 05:35:30.286957 coreos-metadata[1856]: Sep 09 05:35:30.286 INFO Fetch successful Sep 9 05:35:30.286957 coreos-metadata[1856]: Sep 09 05:35:30.286 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 9 05:35:30.287500 coreos-metadata[1856]: Sep 09 05:35:30.287 INFO Fetch successful Sep 9 05:35:30.287684 coreos-metadata[1856]: Sep 09 05:35:30.287 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 9 05:35:30.288502 coreos-metadata[1856]: Sep 09 05:35:30.288 INFO Fetch successful Sep 9 05:35:30.288667 coreos-metadata[1856]: Sep 09 05:35:30.288 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 9 05:35:30.289880 coreos-metadata[1856]: Sep 09 05:35:30.289 INFO Fetch successful Sep 9 05:35:30.323197 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 9 05:35:30.364031 extend-filesystems[1944]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 9 05:35:30.364031 extend-filesystems[1944]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 05:35:30.364031 extend-filesystems[1944]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 9 05:35:30.351699 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 05:35:30.380444 extend-filesystems[1860]: Resized filesystem in /dev/nvme0n1p9 Sep 9 05:35:30.352002 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 05:35:30.394882 bash[1972]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:35:30.395766 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 05:35:30.403158 systemd[1]: Starting sshkeys.service... Sep 9 05:35:30.405378 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 05:35:30.408736 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:35:30.528352 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 9 05:35:30.534605 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 9 05:35:30.659747 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 9 05:35:30.662792 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 9 05:35:30.663693 dbus-daemon[1857]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1946 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 9 05:35:30.671381 systemd[1]: Starting polkit.service - Authorization Manager... Sep 9 05:35:30.691719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
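
The resize2fs entries above grow the root filesystem on /dev/nvme0n1p9 online from 553472 to 1489915 blocks at a 4 KiB block size. A quick back-of-the-envelope conversion (illustrative only, using the numbers from the log) puts that at roughly 2.1 GiB growing to about 5.7 GiB:

```python
# Convert the ext4 block counts logged by resize2fs into bytes and GiB.
BLOCK_SIZE = 4096            # "(4k) blocks" per the log
old_blocks = 553_472
new_blocks = 1_489_915

for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    size = blocks * BLOCK_SIZE
    print(f"{label}: {size} bytes ~ {size / 2**30:.2f} GiB")
# before: 2267021312 bytes ~ 2.11 GiB
# after:  6102691840 bytes ~ 5.68 GiB
```
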
Sep 9 05:35:30.775223 sshd_keygen[1906]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:35:30.812568 coreos-metadata[2005]: Sep 09 05:35:30.812 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 9 05:35:30.814454 coreos-metadata[2005]: Sep 09 05:35:30.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 9 05:35:30.823214 coreos-metadata[2005]: Sep 09 05:35:30.817 INFO Fetch successful Sep 9 05:35:30.823214 coreos-metadata[2005]: Sep 09 05:35:30.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 9 05:35:30.823214 coreos-metadata[2005]: Sep 09 05:35:30.818 INFO Fetch successful Sep 9 05:35:30.825519 unknown[2005]: wrote ssh authorized keys file for user: core Sep 9 05:35:30.909653 update-ssh-keys[2047]: Updated "/home/core/.ssh/authorized_keys" Sep 9 05:35:30.912115 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 9 05:35:30.924922 systemd[1]: Finished sshkeys.service. Sep 9 05:35:30.957189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:30.957455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:30.966930 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:35:30.976797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:30.989268 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:35:30.998860 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:35:31.030372 systemd-logind[1872]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 05:35:31.048259 ntpd[1863]: 9 Sep 05:35:31 ntpd[1863]: bind(24) AF_INET6 fe80::4c2:dbff:fe60:288b%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 05:35:31.048259 ntpd[1863]: 9 Sep 05:35:31 ntpd[1863]: unable to create socket on eth0 (6) for fe80::4c2:dbff:fe60:288b%2#123 Sep 9 05:35:31.048259 ntpd[1863]: 9 Sep 05:35:31 ntpd[1863]: failed to init interface for address fe80::4c2:dbff:fe60:288b%2 Sep 9 05:35:31.047619 ntpd[1863]: bind(24) AF_INET6 fe80::4c2:dbff:fe60:288b%2#123 flags 0x11 failed: Cannot assign requested address Sep 9 05:35:31.047654 ntpd[1863]: unable to create socket on eth0 (6) for fe80::4c2:dbff:fe60:288b%2#123 Sep 9 05:35:31.047671 ntpd[1863]: failed to init interface for address fe80::4c2:dbff:fe60:288b%2 Sep 9 05:35:31.084552 systemd-logind[1872]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 05:35:31.090054 systemd-logind[1872]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 9 05:35:31.091339 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:35:31.091644 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:35:31.097967 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
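
Both coreos-metadata instances above (the general metadata agent and the SSH-keys one) follow the same pattern against the EC2 instance metadata service: a PUT to the token endpoint first, then GETs against the dated meta-data paths; the 404 logged for the ipv6 path simply reflects that the instance has no IPv6 address to report. A rough sketch of that request pattern, assuming Python's urllib, a 6-hour token TTL, and the same 2021-01-03 API version seen in the log (only meaningful when run on an EC2 instance):

```python
# Sketch of the IMDSv2-style fetch pattern seen in the coreos-metadata log lines:
# PUT /latest/api/token, then GET meta-data paths with the returned token.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4",
                 "public-keys/0/openssh-key"):
        print(path, "->", imds_get(path, token))
```
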
Sep 9 05:35:31.102871 polkitd[2022]: Started polkitd version 126 Sep 9 05:35:31.175000 polkitd[2022]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 05:35:31.179885 polkitd[2022]: Loading rules from directory /run/polkit-1/rules.d Sep 9 05:35:31.182261 polkitd[2022]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 05:35:31.182717 polkitd[2022]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 9 05:35:31.182772 polkitd[2022]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 9 05:35:31.182824 polkitd[2022]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 05:35:31.184418 polkitd[2022]: Finished loading, compiling and executing 2 rules Sep 9 05:35:31.184881 systemd[1]: Started polkit.service - Authorization Manager. Sep 9 05:35:31.191895 containerd[1901]: time="2025-09-09T05:35:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 05:35:31.197602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 9 05:35:31.200002 dbus-daemon[1857]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 9 05:35:31.200424 polkitd[2022]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 9 05:35:31.205836 containerd[1901]: time="2025-09-09T05:35:31.205784717Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 05:35:31.214570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 05:35:31.234903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:35:31.239937 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:35:31.248730 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 05:35:31.249783 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:35:31.279260 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 05:35:31.283718 containerd[1901]: time="2025-09-09T05:35:31.283659280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.962µs" Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285209819Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285269792Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285483566Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285508858Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285543415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285619191Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285636338Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285939140Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285960607Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285979179Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.285993478Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 05:35:31.286293 containerd[1901]: time="2025-09-09T05:35:31.286098059Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 05:35:31.292211 containerd[1901]: time="2025-09-09T05:35:31.288450870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:35:31.292211 containerd[1901]: time="2025-09-09T05:35:31.288513221Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 05:35:31.292211 containerd[1901]: time="2025-09-09T05:35:31.288532668Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 05:35:31.292211 containerd[1901]: time="2025-09-09T05:35:31.288592252Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 05:35:31.292211 containerd[1901]: 
time="2025-09-09T05:35:31.288909395Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 05:35:31.292211 containerd[1901]: time="2025-09-09T05:35:31.288991062Z" level=info msg="metadata content store policy set" policy=shared Sep 9 05:35:31.290086 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 05:35:31.293488 containerd[1901]: time="2025-09-09T05:35:31.293145772Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:35:31.293488 containerd[1901]: time="2025-09-09T05:35:31.293436509Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:35:31.293602 containerd[1901]: time="2025-09-09T05:35:31.293519626Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:35:31.293602 containerd[1901]: time="2025-09-09T05:35:31.293540513Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:35:31.293602 containerd[1901]: time="2025-09-09T05:35:31.293557480Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:35:31.293602 containerd[1901]: time="2025-09-09T05:35:31.293571734Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:35:31.293602 containerd[1901]: time="2025-09-09T05:35:31.293599053Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:35:31.293776 containerd[1901]: time="2025-09-09T05:35:31.293616206Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:35:31.293776 containerd[1901]: time="2025-09-09T05:35:31.293631897Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:35:31.293776 containerd[1901]: time="2025-09-09T05:35:31.293647138Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:35:31.293776 containerd[1901]: time="2025-09-09T05:35:31.293662872Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:35:31.293776 containerd[1901]: time="2025-09-09T05:35:31.293682111Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:35:31.293934 containerd[1901]: time="2025-09-09T05:35:31.293830774Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:35:31.293934 containerd[1901]: time="2025-09-09T05:35:31.293857475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:35:31.293934 containerd[1901]: time="2025-09-09T05:35:31.293880371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:35:31.293934 containerd[1901]: time="2025-09-09T05:35:31.293896350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:35:31.293934 containerd[1901]: time="2025-09-09T05:35:31.293917471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:35:31.294094 containerd[1901]: time="2025-09-09T05:35:31.293932397Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:35:31.294094 containerd[1901]: time="2025-09-09T05:35:31.293948734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:35:31.294094 containerd[1901]: time="2025-09-09T05:35:31.293963710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:35:31.294094 containerd[1901]: time="2025-09-09T05:35:31.293979800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:35:31.294094 containerd[1901]: time="2025-09-09T05:35:31.293995894Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 05:35:31.294094 containerd[1901]: time="2025-09-09T05:35:31.294011186Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:35:31.294329 containerd[1901]: time="2025-09-09T05:35:31.294099069Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:35:31.294329 containerd[1901]: time="2025-09-09T05:35:31.294118737Z" level=info msg="Start snapshots syncer" Sep 9 05:35:31.294329 containerd[1901]: time="2025-09-09T05:35:31.294278904Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:35:31.295196 containerd[1901]: time="2025-09-09T05:35:31.294671147Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:35:31.295196 containerd[1901]: time="2025-09-09T05:35:31.294746167Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:35:31.296943 containerd[1901]: time="2025-09-09T05:35:31.296852370Z" level=info msg="loading 
plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297079986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297118718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297137446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297156007Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297205832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297223713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297238005Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297276966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297291095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297305856Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:35:31.297415 containerd[1901]: time="2025-09-09T05:35:31.297388136Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297415657Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297486267Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297502361Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297515262Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297529705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297546682Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297569001Z" level=info msg="runtime interface created" Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297577260Z" level=info msg="created NRI interface" Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297589376Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297610190Z" level=info msg="Connect containerd service" Sep 9 05:35:31.297818 containerd[1901]: time="2025-09-09T05:35:31.297648407Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:35:31.299872 containerd[1901]: time="2025-09-09T05:35:31.298748009Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:35:31.314222 systemd-resolved[1768]: System hostname changed to 'ip-172-31-25-117'. Sep 9 05:35:31.314247 systemd-hostnamed[1946]: Hostname set to (transient) Sep 9 05:35:31.316716 locksmithd[1947]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 05:35:31.511276 systemd-networkd[1817]: eth0: Gained IPv6LL Sep 9 05:35:31.519899 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:35:31.521884 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:35:31.526539 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 9 05:35:31.533207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:31.540904 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656753469Z" level=info msg="Start subscribing containerd event" Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656824201Z" level=info msg="Start recovering state" Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656937787Z" level=info msg="Start event monitor" Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656955889Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656972265Z" level=info msg="Start streaming server" Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656984880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.656994884Z" level=info msg="runtime interface starting up..." Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.657003975Z" level=info msg="starting plugins..." Sep 9 05:35:31.657215 containerd[1901]: time="2025-09-09T05:35:31.657019307Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:35:31.655886 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:35:31.660000 containerd[1901]: time="2025-09-09T05:35:31.659702755Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:35:31.660000 containerd[1901]: time="2025-09-09T05:35:31.659975677Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:35:31.660153 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 9 05:35:31.660854 containerd[1901]: time="2025-09-09T05:35:31.660819727Z" level=info msg="containerd successfully booted in 0.476418s" Sep 9 05:35:31.716925 amazon-ssm-agent[2192]: Initializing new seelog logger Sep 9 05:35:31.718189 amazon-ssm-agent[2192]: New Seelog Logger Creation Complete Sep 9 05:35:31.718189 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.718189 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.718370 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 processing appconfig overrides Sep 9 05:35:31.719106 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.719265 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.719429 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 processing appconfig overrides Sep 9 05:35:31.720329 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.720420 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.720583 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 processing appconfig overrides Sep 9 05:35:31.721133 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7190 INFO Proxy environment variables: Sep 9 05:35:31.724443 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.724443 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:31.724587 amazon-ssm-agent[2192]: 2025/09/09 05:35:31 processing appconfig overrides Sep 9 05:35:31.821590 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7190 INFO no_proxy: Sep 9 05:35:31.918645 tar[1887]: linux-amd64/README.md Sep 9 05:35:31.919292 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7190 INFO https_proxy: Sep 9 05:35:31.943074 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:35:32.017877 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7190 INFO http_proxy: Sep 9 05:35:32.116389 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7194 INFO Checking if agent identity type OnPrem can be assumed Sep 9 05:35:32.215303 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7201 INFO Checking if agent identity type EC2 can be assumed Sep 9 05:35:32.247240 amazon-ssm-agent[2192]: 2025/09/09 05:35:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:32.247240 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 9 05:35:32.247240 amazon-ssm-agent[2192]: 2025/09/09 05:35:32 processing appconfig overrides Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7947 INFO Agent will take identity from EC2 Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7980 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7980 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7981 INFO [amazon-ssm-agent] Starting Core Agent Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7981 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.7981 INFO [Registrar] Starting registrar module Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.8013 INFO [EC2Identity] Checking disk for registration info Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.8013 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:31.8013 INFO [EC2Identity] Generating registration keypair Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2108 INFO [EC2Identity] Checking write access before registering Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2113 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2466 INFO [EC2Identity] EC2 registration was successful. Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2466 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2467 INFO [CredentialRefresher] credentialRefresher has started Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2467 INFO [CredentialRefresher] Starting credentials refresher loop Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2754 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 9 05:35:32.275909 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2756 INFO [CredentialRefresher] Credentials ready Sep 9 05:35:32.314140 amazon-ssm-agent[2192]: 2025-09-09 05:35:32.2758 INFO [CredentialRefresher] Next credential rotation will be in 29.99999464865 minutes Sep 9 05:35:32.365963 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:35:32.367626 systemd[1]: Started sshd@0-172.31.25.117:22-147.75.109.163:45386.service - OpenSSH per-connection server daemon (147.75.109.163:45386). Sep 9 05:35:32.595196 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 45386 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:32.598232 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:32.605666 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:35:32.608573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:35:32.621316 systemd-logind[1872]: New session 1 of user core. Sep 9 05:35:32.632067 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:35:32.636925 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:35:32.649336 (systemd)[2225]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:35:32.652022 systemd-logind[1872]: New session c1 of user core. Sep 9 05:35:32.838490 systemd[2225]: Queued start job for default target default.target. Sep 9 05:35:32.845614 systemd[2225]: Created slice app.slice - User Application Slice. Sep 9 05:35:32.845655 systemd[2225]: Reached target paths.target - Paths. Sep 9 05:35:32.845709 systemd[2225]: Reached target timers.target - Timers. Sep 9 05:35:32.848359 systemd[2225]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:35:32.862519 systemd[2225]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:35:32.862666 systemd[2225]: Reached target sockets.target - Sockets. 
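
The `RSA SHA256:k1gU...` string in the "Accepted publickey" entries is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A small sketch of that computation (the key material below is a placeholder for illustration, not the key from this host):

```python
# Compute an OpenSSH-style SHA256 fingerprint from an authorized_keys entry.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # Field 2 of an authorized_keys entry is the base64-encoded key blob.
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Placeholder key blob for illustration only; generate a real one with ssh-keygen.
example = ("ssh-ed25519 "
           "AAAAC3NzaC1lZDI1NTE5AAAAIPLACEHOLDERPLACEHOLDERPLACEHOLDERPLACEHOLD "
           "core@example")
print(ssh_fingerprint(example))
```
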
Sep 9 05:35:32.862721 systemd[2225]: Reached target basic.target - Basic System. Sep 9 05:35:32.862776 systemd[2225]: Reached target default.target - Main User Target. Sep 9 05:35:32.862817 systemd[2225]: Startup finished in 202ms. Sep 9 05:35:32.863059 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:35:32.870379 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:35:33.017825 systemd[1]: Started sshd@1-172.31.25.117:22-147.75.109.163:45402.service - OpenSSH per-connection server daemon (147.75.109.163:45402). Sep 9 05:35:33.191087 sshd[2236]: Accepted publickey for core from 147.75.109.163 port 45402 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:33.192639 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:33.198604 systemd-logind[1872]: New session 2 of user core. Sep 9 05:35:33.200522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:33.210838 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:35:33.210994 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:35:33.211923 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:35:33.213298 systemd[1]: Startup finished in 2.736s (kernel) + 7.627s (initrd) + 6.957s (userspace) = 17.321s. Sep 9 05:35:33.294616 amazon-ssm-agent[2192]: 2025-09-09 05:35:33.2924 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 9 05:35:33.335432 sshd[2245]: Connection closed by 147.75.109.163 port 45402 Sep 9 05:35:33.336865 sshd-session[2236]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:33.344874 systemd[1]: sshd@1-172.31.25.117:22-147.75.109.163:45402.service: Deactivated successfully. Sep 9 05:35:33.347675 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:35:33.349831 systemd-logind[1872]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:35:33.350708 systemd-logind[1872]: Removed session 2. Sep 9 05:35:33.368704 systemd[1]: Started sshd@2-172.31.25.117:22-147.75.109.163:45410.service - OpenSSH per-connection server daemon (147.75.109.163:45410). Sep 9 05:35:33.394733 amazon-ssm-agent[2192]: 2025-09-09 05:35:33.2955 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2252) started Sep 9 05:35:33.494962 amazon-ssm-agent[2192]: 2025-09-09 05:35:33.2956 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 9 05:35:33.553754 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 45410 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:33.555386 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:33.561610 systemd-logind[1872]: New session 3 of user core. Sep 9 05:35:33.564473 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:35:33.684792 sshd[2276]: Connection closed by 147.75.109.163 port 45410 Sep 9 05:35:33.685835 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:33.689686 systemd[1]: sshd@2-172.31.25.117:22-147.75.109.163:45410.service: Deactivated successfully. Sep 9 05:35:33.691593 systemd[1]: session-3.scope: Deactivated successfully. 
Sep 9 05:35:33.694006 systemd-logind[1872]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:35:33.695525 systemd-logind[1872]: Removed session 3. Sep 9 05:35:33.718019 systemd[1]: Started sshd@3-172.31.25.117:22-147.75.109.163:45412.service - OpenSSH per-connection server daemon (147.75.109.163:45412). Sep 9 05:35:33.882046 sshd[2282]: Accepted publickey for core from 147.75.109.163 port 45412 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:33.883671 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:33.890339 systemd-logind[1872]: New session 4 of user core. Sep 9 05:35:33.895399 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:35:34.015224 sshd[2286]: Connection closed by 147.75.109.163 port 45412 Sep 9 05:35:34.015856 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.021667 systemd[1]: sshd@3-172.31.25.117:22-147.75.109.163:45412.service: Deactivated successfully. Sep 9 05:35:34.023898 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:35:34.025135 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:35:34.028062 systemd-logind[1872]: Removed session 4. Sep 9 05:35:34.047785 ntpd[1863]: Listen normally on 7 eth0 [fe80::4c2:dbff:fe60:288b%2]:123 Sep 9 05:35:34.048635 ntpd[1863]: 9 Sep 05:35:34 ntpd[1863]: Listen normally on 7 eth0 [fe80::4c2:dbff:fe60:288b%2]:123 Sep 9 05:35:34.052139 systemd[1]: Started sshd@4-172.31.25.117:22-147.75.109.163:45418.service - OpenSSH per-connection server daemon (147.75.109.163:45418). Sep 9 05:35:34.145652 kubelet[2243]: E0909 05:35:34.144981 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:35:34.147838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:35:34.148032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:35:34.148767 systemd[1]: kubelet.service: Consumed 1.064s CPU time, 265.8M memory peak. Sep 9 05:35:34.225890 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 45418 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:34.227090 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.232742 systemd-logind[1872]: New session 5 of user core. Sep 9 05:35:34.249911 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 05:35:34.397762 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:35:34.398287 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:34.413672 sudo[2298]: pam_unix(sudo:session): session closed for user root Sep 9 05:35:34.436783 sshd[2297]: Connection closed by 147.75.109.163 port 45418 Sep 9 05:35:34.437521 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.441832 systemd[1]: sshd@4-172.31.25.117:22-147.75.109.163:45418.service: Deactivated successfully. Sep 9 05:35:34.443679 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:35:34.444621 systemd-logind[1872]: Session 5 logged out. Waiting for processes to exit. 
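
The ntpd entry above ("Listen normally on 7 eth0 [fe80::4c2:dbff:fe60:288b%2]:123") is the resolution of the repeated "Cannot assign requested address" failures earlier in the boot: a link-local socket can only be bound once the kernel has actually assigned the address (eth0 "Gained IPv6LL" earlier), and the bind must carry the interface scope id. A minimal sketch of that failure mode, assuming the interface name eth0 and using port 0 so no privileges are needed:

```python
# Demonstrate why binding an IPv6 link-local address fails until the address
# exists on the interface: EADDRNOTAVAIL, the errno behind ntpd's repeated
# "Cannot assign requested address" messages.
import socket

ADDR = "fe80::4c2:dbff:fe60:288b"   # link-local address from the log
IFACE = "eth0"                      # assumed interface name

scope_id = socket.if_nametoindex(IFACE)
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # AF_INET6 sockets bind with a 4-tuple: (host, port, flowinfo, scope_id).
    sock.bind((ADDR, 0, 0, scope_id))
    print("bound successfully (address is configured)")
except OSError as exc:
    print("bind failed:", exc)      # EADDRNOTAVAIL while the address is absent
finally:
    sock.close()
```
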
Sep 9 05:35:34.446097 systemd-logind[1872]: Removed session 5. Sep 9 05:35:34.474409 systemd[1]: Started sshd@5-172.31.25.117:22-147.75.109.163:45420.service - OpenSSH per-connection server daemon (147.75.109.163:45420). Sep 9 05:35:34.644459 sshd[2304]: Accepted publickey for core from 147.75.109.163 port 45420 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:34.646078 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.651247 systemd-logind[1872]: New session 6 of user core. Sep 9 05:35:34.662429 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:35:34.758141 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:35:34.758436 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:34.765264 sudo[2309]: pam_unix(sudo:session): session closed for user root Sep 9 05:35:34.770948 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:35:34.771341 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:34.782279 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:35:34.820520 augenrules[2331]: No rules Sep 9 05:35:34.821825 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:35:34.822091 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:35:34.823118 sudo[2308]: pam_unix(sudo:session): session closed for user root Sep 9 05:35:34.845807 sshd[2307]: Connection closed by 147.75.109.163 port 45420 Sep 9 05:35:34.846353 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.850302 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:35:34.851087 systemd[1]: sshd@5-172.31.25.117:22-147.75.109.163:45420.service: Deactivated successfully. Sep 9 05:35:34.853245 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:35:34.854920 systemd-logind[1872]: Removed session 6. Sep 9 05:35:34.875925 systemd[1]: Started sshd@6-172.31.25.117:22-147.75.109.163:45430.service - OpenSSH per-connection server daemon (147.75.109.163:45430). Sep 9 05:35:35.040977 sshd[2340]: Accepted publickey for core from 147.75.109.163 port 45430 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:35:35.042544 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:35.048994 systemd-logind[1872]: New session 7 of user core. Sep 9 05:35:35.057408 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:35:35.150747 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:35:35.151024 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:35.757770 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 9 05:35:35.774725 (dockerd)[2363]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:35:36.275829 dockerd[2363]: time="2025-09-09T05:35:36.275553938Z" level=info msg="Starting up" Sep 9 05:35:36.280304 dockerd[2363]: time="2025-09-09T05:35:36.280143638Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:35:36.293140 dockerd[2363]: time="2025-09-09T05:35:36.293089138Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:35:36.503448 dockerd[2363]: time="2025-09-09T05:35:36.503214930Z" level=info msg="Loading containers: start." Sep 9 05:35:36.526329 kernel: Initializing XFRM netlink socket Sep 9 05:35:36.812778 (udev-worker)[2385]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:35:36.854769 systemd-networkd[1817]: docker0: Link UP Sep 9 05:35:36.859816 dockerd[2363]: time="2025-09-09T05:35:36.859753362Z" level=info msg="Loading containers: done." Sep 9 05:35:36.877643 dockerd[2363]: time="2025-09-09T05:35:36.877566842Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:35:36.877816 dockerd[2363]: time="2025-09-09T05:35:36.877666321Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:35:36.877816 dockerd[2363]: time="2025-09-09T05:35:36.877756380Z" level=info msg="Initializing buildkit" Sep 9 05:35:36.905282 dockerd[2363]: time="2025-09-09T05:35:36.905214165Z" level=info msg="Completed buildkit initialization" Sep 9 05:35:36.913040 dockerd[2363]: time="2025-09-09T05:35:36.912989946Z" level=info msg="Daemon has completed initialization" Sep 9 05:35:36.913352 dockerd[2363]: time="2025-09-09T05:35:36.913206482Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:35:36.913311 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:35:38.641459 systemd-resolved[1768]: Clock change detected. Flushing caches. Sep 9 05:35:39.482527 containerd[1901]: time="2025-09-09T05:35:39.482486876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 05:35:40.058137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702207544.mount: Deactivated successfully. 
Sep 9 05:35:41.491860 containerd[1901]: time="2025-09-09T05:35:41.491806232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:41.492974 containerd[1901]: time="2025-09-09T05:35:41.492813710Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 05:35:41.494099 containerd[1901]: time="2025-09-09T05:35:41.494062617Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:41.496853 containerd[1901]: time="2025-09-09T05:35:41.496815956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:41.498402 containerd[1901]: time="2025-09-09T05:35:41.498206079Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.015676818s" Sep 9 05:35:41.498402 containerd[1901]: time="2025-09-09T05:35:41.498255034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 05:35:41.499088 containerd[1901]: time="2025-09-09T05:35:41.499059072Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 05:35:43.114720 containerd[1901]: time="2025-09-09T05:35:43.114658484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:43.116696 containerd[1901]: time="2025-09-09T05:35:43.116660771Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 05:35:43.119164 containerd[1901]: time="2025-09-09T05:35:43.119105986Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:43.123351 containerd[1901]: time="2025-09-09T05:35:43.123287195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:43.124240 containerd[1901]: time="2025-09-09T05:35:43.124203655Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.625106385s" Sep 9 05:35:43.124240 containerd[1901]: time="2025-09-09T05:35:43.124236755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 05:35:43.125068 containerd[1901]: 
time="2025-09-09T05:35:43.125021853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 05:35:44.543884 containerd[1901]: time="2025-09-09T05:35:44.543835990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:44.546450 containerd[1901]: time="2025-09-09T05:35:44.546163328Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 05:35:44.549075 containerd[1901]: time="2025-09-09T05:35:44.549031931Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:44.554132 containerd[1901]: time="2025-09-09T05:35:44.554079794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:44.556818 containerd[1901]: time="2025-09-09T05:35:44.555686999Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.430619857s" Sep 9 05:35:44.556818 containerd[1901]: time="2025-09-09T05:35:44.555737592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 05:35:44.557898 containerd[1901]: time="2025-09-09T05:35:44.557865564Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 05:35:45.596794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707874914.mount: Deactivated successfully. Sep 9 05:35:45.992279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:35:45.995250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:46.308448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:35:46.316670 containerd[1901]: time="2025-09-09T05:35:46.316592086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:46.319080 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:35:46.320639 containerd[1901]: time="2025-09-09T05:35:46.320595133Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 05:35:46.321769 containerd[1901]: time="2025-09-09T05:35:46.321679657Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:46.325152 containerd[1901]: time="2025-09-09T05:35:46.325088306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:46.326430 containerd[1901]: time="2025-09-09T05:35:46.326362185Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.768456035s" Sep 9 05:35:46.326430 containerd[1901]: time="2025-09-09T05:35:46.326399543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 05:35:46.327992 containerd[1901]: time="2025-09-09T05:35:46.327944215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 05:35:46.373165 kubelet[2652]: E0909 05:35:46.373123 2652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:35:46.377713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:35:46.377984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:35:46.378753 systemd[1]: kubelet.service: Consumed 201ms CPU time, 108.9M memory peak. Sep 9 05:35:46.838264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035113979.mount: Deactivated successfully. 
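Annotation: the kubelet instance above (pid 2652) exits because /var/lib/kubelet/config.yaml does not exist yet; this is the usual pre-join crash loop, and systemd keeps scheduling restarts until the configuration is written, after which a later kubelet instance starts cleanly. A hypothetical helper that detects this state from the journal (the journalctl invocation and the path are taken from the records above; the helper itself is an assumption, not part of this system):

    # Hypothetical helper: detect the "waiting for kubelet config" crash loop.
    import re
    import subprocess

    out = subprocess.run(
        ["journalctl", "-u", "kubelet", "-o", "cat", "--no-pager"],
        capture_output=True, text=True, check=False,
    ).stdout
    if re.search(r"failed to load kubelet config file.*/var/lib/kubelet/config\.yaml", out):
        print("kubelet is restarting until /var/lib/kubelet/config.yaml is written")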
Sep 9 05:35:47.924052 containerd[1901]: time="2025-09-09T05:35:47.923985886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:47.930834 containerd[1901]: time="2025-09-09T05:35:47.930569729Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 05:35:47.933107 containerd[1901]: time="2025-09-09T05:35:47.933043796Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:47.937036 containerd[1901]: time="2025-09-09T05:35:47.936991663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:47.937320 containerd[1901]: time="2025-09-09T05:35:47.937282641Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.609178914s" Sep 9 05:35:47.937363 containerd[1901]: time="2025-09-09T05:35:47.937321636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 05:35:47.938295 containerd[1901]: time="2025-09-09T05:35:47.938106343Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:35:48.382821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154919736.mount: Deactivated successfully. 
Sep 9 05:35:48.391510 containerd[1901]: time="2025-09-09T05:35:48.391453983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:35:48.393335 containerd[1901]: time="2025-09-09T05:35:48.393278917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 05:35:48.395639 containerd[1901]: time="2025-09-09T05:35:48.395577373Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:35:48.398943 containerd[1901]: time="2025-09-09T05:35:48.398887963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:35:48.399724 containerd[1901]: time="2025-09-09T05:35:48.399317311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 461.181919ms" Sep 9 05:35:48.399724 containerd[1901]: time="2025-09-09T05:35:48.399346438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:35:48.400013 containerd[1901]: time="2025-09-09T05:35:48.399995415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 05:35:48.919852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount735984757.mount: Deactivated successfully. 
Sep 9 05:35:51.239217 containerd[1901]: time="2025-09-09T05:35:51.239156511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:51.240584 containerd[1901]: time="2025-09-09T05:35:51.240527476Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 05:35:51.241667 containerd[1901]: time="2025-09-09T05:35:51.241629816Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:51.244527 containerd[1901]: time="2025-09-09T05:35:51.244466440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:51.245891 containerd[1901]: time="2025-09-09T05:35:51.245725605Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.845701447s" Sep 9 05:35:51.245891 containerd[1901]: time="2025-09-09T05:35:51.245767542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 05:35:53.771600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:53.772323 systemd[1]: kubelet.service: Consumed 201ms CPU time, 108.9M memory peak. Sep 9 05:35:53.774936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:53.809997 systemd[1]: Reload requested from client PID 2797 ('systemctl') (unit session-7.scope)... Sep 9 05:35:53.810016 systemd[1]: Reloading... Sep 9 05:35:53.957581 zram_generator::config[2838]: No configuration found. Sep 9 05:35:54.254460 systemd[1]: Reloading finished in 443 ms. Sep 9 05:35:54.323105 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:35:54.323190 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:35:54.323703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:54.323767 systemd[1]: kubelet.service: Consumed 141ms CPU time, 97.6M memory peak. Sep 9 05:35:54.326206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:54.600092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:54.613072 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:35:54.670708 kubelet[2905]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:35:54.670708 kubelet[2905]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:35:54.670708 kubelet[2905]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:35:54.671062 kubelet[2905]: I0909 05:35:54.670792 2905 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:35:54.854185 kubelet[2905]: I0909 05:35:54.854060 2905 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:35:54.854185 kubelet[2905]: I0909 05:35:54.854089 2905 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:35:54.854800 kubelet[2905]: I0909 05:35:54.854772 2905 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:35:54.934155 kubelet[2905]: I0909 05:35:54.934104 2905 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:35:54.937316 kubelet[2905]: E0909 05:35:54.937249 2905 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.954972 kubelet[2905]: I0909 05:35:54.954911 2905 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:35:54.960157 kubelet[2905]: I0909 05:35:54.960124 2905 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 05:35:54.964675 kubelet[2905]: I0909 05:35:54.964589 2905 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:35:54.964848 kubelet[2905]: I0909 05:35:54.964656 2905 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:35:54.969455 kubelet[2905]: I0909 05:35:54.969402 
2905 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:35:54.969455 kubelet[2905]: I0909 05:35:54.969457 2905 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:35:54.971100 kubelet[2905]: I0909 05:35:54.971063 2905 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:35:54.983267 kubelet[2905]: I0909 05:35:54.983139 2905 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:35:54.983267 kubelet[2905]: I0909 05:35:54.983200 2905 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:35:54.984003 kubelet[2905]: I0909 05:35:54.983959 2905 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:35:54.984003 kubelet[2905]: I0909 05:35:54.984004 2905 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:35:54.992053 kubelet[2905]: W0909 05:35:54.991347 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-117&limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:54.992053 kubelet[2905]: E0909 05:35:54.991413 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-117&limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.992053 kubelet[2905]: W0909 05:35:54.991817 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:54.992053 kubelet[2905]: E0909 05:35:54.991852 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.992053 kubelet[2905]: I0909 05:35:54.991937 2905 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:35:54.997028 kubelet[2905]: I0909 05:35:54.996992 2905 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:35:55.003792 kubelet[2905]: W0909 05:35:55.002735 2905 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 05:35:55.004015 kubelet[2905]: I0909 05:35:55.003990 2905 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:35:55.004074 kubelet[2905]: I0909 05:35:55.004045 2905 server.go:1287] "Started kubelet" Sep 9 05:35:55.007031 kubelet[2905]: I0909 05:35:55.006983 2905 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:35:55.008443 kubelet[2905]: I0909 05:35:55.008409 2905 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:35:55.012254 kubelet[2905]: I0909 05:35:55.011719 2905 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:35:55.012254 kubelet[2905]: I0909 05:35:55.012129 2905 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:35:55.020693 kubelet[2905]: I0909 05:35:55.020665 2905 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:35:55.023349 kubelet[2905]: E0909 05:35:55.019303 2905 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.117:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-117.1863867deca6d9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-117,UID:ip-172-31-25-117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-117,},FirstTimestamp:2025-09-09 05:35:55.004017065 +0000 UTC m=+0.386749353,LastTimestamp:2025-09-09 05:35:55.004017065 +0000 UTC m=+0.386749353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-117,}" Sep 9 05:35:55.025852 kubelet[2905]: I0909 05:35:55.025819 2905 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:35:55.030617 kubelet[2905]: E0909 05:35:55.030394 2905 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-117\" not found" Sep 9 05:35:55.030617 kubelet[2905]: I0909 05:35:55.030456 2905 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:35:55.030770 kubelet[2905]: I0909 05:35:55.030755 2905 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:35:55.030836 kubelet[2905]: I0909 05:35:55.030811 2905 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:35:55.031335 kubelet[2905]: W0909 05:35:55.031281 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:55.031429 kubelet[2905]: E0909 05:35:55.031350 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.031700 kubelet[2905]: E0909 05:35:55.031657 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-117?timeout=10s\": dial tcp 172.31.25.117:6443: connect: connection refused" interval="200ms" Sep 9 05:35:55.040930 kubelet[2905]: I0909 05:35:55.040887 2905 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:35:55.041068 kubelet[2905]: I0909 05:35:55.041012 2905 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:35:55.044358 kubelet[2905]: I0909 05:35:55.044228 2905 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:35:55.053902 kubelet[2905]: I0909 05:35:55.053849 2905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:35:55.056488 kubelet[2905]: I0909 05:35:55.056102 2905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:35:55.056488 kubelet[2905]: I0909 05:35:55.056140 2905 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:35:55.056488 kubelet[2905]: I0909 05:35:55.056167 2905 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 05:35:55.056488 kubelet[2905]: I0909 05:35:55.056176 2905 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:35:55.056488 kubelet[2905]: E0909 05:35:55.056231 2905 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:35:55.065762 kubelet[2905]: E0909 05:35:55.065734 2905 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:35:55.066306 kubelet[2905]: W0909 05:35:55.066258 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:55.066454 kubelet[2905]: E0909 05:35:55.066425 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.079461 kubelet[2905]: I0909 05:35:55.079428 2905 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:35:55.079461 kubelet[2905]: I0909 05:35:55.079466 2905 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:35:55.079673 kubelet[2905]: I0909 05:35:55.079489 2905 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:35:55.085654 kubelet[2905]: I0909 05:35:55.085610 2905 policy_none.go:49] "None policy: Start" Sep 9 05:35:55.085654 kubelet[2905]: I0909 05:35:55.085646 2905 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:35:55.085654 kubelet[2905]: I0909 05:35:55.085662 2905 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:35:55.093208 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:35:55.105771 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 05:35:55.111970 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:35:55.121783 kubelet[2905]: I0909 05:35:55.121721 2905 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:35:55.122441 kubelet[2905]: I0909 05:35:55.122427 2905 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:35:55.122753 kubelet[2905]: I0909 05:35:55.122722 2905 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:35:55.126315 kubelet[2905]: E0909 05:35:55.125701 2905 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:35:55.126315 kubelet[2905]: E0909 05:35:55.126256 2905 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-117\" not found" Sep 9 05:35:55.126649 kubelet[2905]: I0909 05:35:55.126631 2905 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:35:55.166936 systemd[1]: Created slice kubepods-burstable-pod6db4de21a0c522dd96b249cc10f33e3d.slice - libcontainer container kubepods-burstable-pod6db4de21a0c522dd96b249cc10f33e3d.slice. Sep 9 05:35:55.175393 kubelet[2905]: E0909 05:35:55.174531 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:55.177947 systemd[1]: Created slice kubepods-burstable-pod2f8a10ebeaffa42847e9ce44be677ed6.slice - libcontainer container kubepods-burstable-pod2f8a10ebeaffa42847e9ce44be677ed6.slice. Sep 9 05:35:55.191385 kubelet[2905]: E0909 05:35:55.191358 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:55.195814 systemd[1]: Created slice kubepods-burstable-pod02c20e5448fef9e5424f5d280fb05f27.slice - libcontainer container kubepods-burstable-pod02c20e5448fef9e5424f5d280fb05f27.slice. 
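Annotation: because this kubelet runs with the systemd cgroup driver (CgroupDriver "systemd", CgroupVersion 2 in the container-manager record above), each static pod gets a per-pod slice under kubepods-burstable.slice named after its UID, which is how units like kubepods-burstable-pod6db4de21a0c522dd96b249cc10f33e3d.slice map back to the kube-apiserver, controller-manager, and scheduler pods. A small sketch of that name/path derivation, assuming the conventional layout (the layout is an assumption from the driver's usual behavior, not something stated in this log):

    # Sketch: map a static-pod UID from the log to its expected cgroup slice path
    # under the systemd cgroup driver (conventional layout; an assumption).
    def pod_slice_path(uid: str, qos: str = "burstable") -> str:
        slice_name = f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"
        return f"/sys/fs/cgroup/kubepods.slice/kubepods-{qos}.slice/{slice_name}"

    print(pod_slice_path("6db4de21a0c522dd96b249cc10f33e3d"))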
Sep 9 05:35:55.199736 kubelet[2905]: E0909 05:35:55.199499 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:55.225712 kubelet[2905]: I0909 05:35:55.225683 2905 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-117" Sep 9 05:35:55.226224 kubelet[2905]: E0909 05:35:55.226186 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.117:6443/api/v1/nodes\": dial tcp 172.31.25.117:6443: connect: connection refused" node="ip-172-31-25-117" Sep 9 05:35:55.232953 kubelet[2905]: E0909 05:35:55.232915 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-117?timeout=10s\": dial tcp 172.31.25.117:6443: connect: connection refused" interval="400ms" Sep 9 05:35:55.331438 kubelet[2905]: I0909 05:35:55.331396 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6db4de21a0c522dd96b249cc10f33e3d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-117\" (UID: \"6db4de21a0c522dd96b249cc10f33e3d\") " pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:55.331438 kubelet[2905]: I0909 05:35:55.331436 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:55.331438 kubelet[2905]: I0909 05:35:55.331456 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:55.331841 kubelet[2905]: I0909 05:35:55.331472 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02c20e5448fef9e5424f5d280fb05f27-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-117\" (UID: \"02c20e5448fef9e5424f5d280fb05f27\") " pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:35:55.331841 kubelet[2905]: I0909 05:35:55.331495 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6db4de21a0c522dd96b249cc10f33e3d-ca-certs\") pod \"kube-apiserver-ip-172-31-25-117\" (UID: \"6db4de21a0c522dd96b249cc10f33e3d\") " pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:55.331841 kubelet[2905]: I0909 05:35:55.331513 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:55.331841 kubelet[2905]: I0909 05:35:55.331530 2905 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:55.331841 kubelet[2905]: I0909 05:35:55.331566 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:55.331966 kubelet[2905]: I0909 05:35:55.331582 2905 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6db4de21a0c522dd96b249cc10f33e3d-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-117\" (UID: \"6db4de21a0c522dd96b249cc10f33e3d\") " pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:55.428544 kubelet[2905]: I0909 05:35:55.428365 2905 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-117" Sep 9 05:35:55.429282 kubelet[2905]: E0909 05:35:55.429244 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.117:6443/api/v1/nodes\": dial tcp 172.31.25.117:6443: connect: connection refused" node="ip-172-31-25-117" Sep 9 05:35:55.476486 containerd[1901]: time="2025-09-09T05:35:55.476448541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-117,Uid:6db4de21a0c522dd96b249cc10f33e3d,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:55.493230 containerd[1901]: time="2025-09-09T05:35:55.493180840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-117,Uid:2f8a10ebeaffa42847e9ce44be677ed6,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:55.500560 containerd[1901]: time="2025-09-09T05:35:55.500504080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-117,Uid:02c20e5448fef9e5424f5d280fb05f27,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:55.631743 containerd[1901]: time="2025-09-09T05:35:55.631661652Z" level=info msg="connecting to shim b2a04a866c781adcf28d1fb4f35dc54faef299bdf1f67c68817c39f832133684" address="unix:///run/containerd/s/bc6cad3d8e97646ddd11ed5dd33a39e7a60ff9044bc2bc5c6e43537c984090be" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:55.634098 kubelet[2905]: E0909 05:35:55.634051 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-117?timeout=10s\": dial tcp 172.31.25.117:6443: connect: connection refused" interval="800ms" Sep 9 05:35:55.635918 containerd[1901]: time="2025-09-09T05:35:55.635828940Z" level=info msg="connecting to shim 0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc" address="unix:///run/containerd/s/a500bfe95940e1318425e09565d5fabd125c60c6626c25da9c65c50928997300" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:55.641003 containerd[1901]: time="2025-09-09T05:35:55.640941285Z" level=info msg="connecting to shim 9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e" 
address="unix:///run/containerd/s/16e7e41a156f580262d25be1ea12415793f80c8e481edd9da58710fc2afaf765" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:55.746043 systemd[1]: Started cri-containerd-b2a04a866c781adcf28d1fb4f35dc54faef299bdf1f67c68817c39f832133684.scope - libcontainer container b2a04a866c781adcf28d1fb4f35dc54faef299bdf1f67c68817c39f832133684. Sep 9 05:35:55.760885 systemd[1]: Started cri-containerd-0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc.scope - libcontainer container 0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc. Sep 9 05:35:55.762801 systemd[1]: Started cri-containerd-9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e.scope - libcontainer container 9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e. Sep 9 05:35:55.835745 kubelet[2905]: I0909 05:35:55.835542 2905 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-117" Sep 9 05:35:55.836081 kubelet[2905]: E0909 05:35:55.835887 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.117:6443/api/v1/nodes\": dial tcp 172.31.25.117:6443: connect: connection refused" node="ip-172-31-25-117" Sep 9 05:35:55.855903 containerd[1901]: time="2025-09-09T05:35:55.854462996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-117,Uid:02c20e5448fef9e5424f5d280fb05f27,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc\"" Sep 9 05:35:55.860748 containerd[1901]: time="2025-09-09T05:35:55.860681778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-117,Uid:2f8a10ebeaffa42847e9ce44be677ed6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e\"" Sep 9 05:35:55.863128 containerd[1901]: time="2025-09-09T05:35:55.862913706Z" level=info msg="CreateContainer within sandbox \"9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:35:55.864207 containerd[1901]: time="2025-09-09T05:35:55.863603344Z" level=info msg="CreateContainer within sandbox \"0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:35:55.866696 containerd[1901]: time="2025-09-09T05:35:55.866246257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-117,Uid:6db4de21a0c522dd96b249cc10f33e3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2a04a866c781adcf28d1fb4f35dc54faef299bdf1f67c68817c39f832133684\"" Sep 9 05:35:55.869332 containerd[1901]: time="2025-09-09T05:35:55.869303258Z" level=info msg="CreateContainer within sandbox \"b2a04a866c781adcf28d1fb4f35dc54faef299bdf1f67c68817c39f832133684\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:35:55.891710 containerd[1901]: time="2025-09-09T05:35:55.891658520Z" level=info msg="Container 02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:35:55.899283 containerd[1901]: time="2025-09-09T05:35:55.899245290Z" level=info msg="Container 90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:35:55.907126 containerd[1901]: time="2025-09-09T05:35:55.907092875Z" level=info msg="Container 
ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:35:55.926029 kubelet[2905]: W0909 05:35:55.925968 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:55.926029 kubelet[2905]: E0909 05:35:55.926031 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.928985 containerd[1901]: time="2025-09-09T05:35:55.928924375Z" level=info msg="CreateContainer within sandbox \"b2a04a866c781adcf28d1fb4f35dc54faef299bdf1f67c68817c39f832133684\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9\"" Sep 9 05:35:55.930144 containerd[1901]: time="2025-09-09T05:35:55.930027787Z" level=info msg="CreateContainer within sandbox \"9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\"" Sep 9 05:35:55.930396 containerd[1901]: time="2025-09-09T05:35:55.930371808Z" level=info msg="StartContainer for \"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\"" Sep 9 05:35:55.931644 containerd[1901]: time="2025-09-09T05:35:55.931605759Z" level=info msg="StartContainer for \"90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9\"" Sep 9 05:35:55.933054 containerd[1901]: time="2025-09-09T05:35:55.932943373Z" level=info msg="connecting to shim 02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700" address="unix:///run/containerd/s/16e7e41a156f580262d25be1ea12415793f80c8e481edd9da58710fc2afaf765" protocol=ttrpc version=3 Sep 9 05:35:55.934477 containerd[1901]: time="2025-09-09T05:35:55.934399912Z" level=info msg="connecting to shim 90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9" address="unix:///run/containerd/s/bc6cad3d8e97646ddd11ed5dd33a39e7a60ff9044bc2bc5c6e43537c984090be" protocol=ttrpc version=3 Sep 9 05:35:55.936304 containerd[1901]: time="2025-09-09T05:35:55.936272902Z" level=info msg="CreateContainer within sandbox \"0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\"" Sep 9 05:35:55.936761 containerd[1901]: time="2025-09-09T05:35:55.936740322Z" level=info msg="StartContainer for \"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\"" Sep 9 05:35:55.937665 containerd[1901]: time="2025-09-09T05:35:55.937636957Z" level=info msg="connecting to shim ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2" address="unix:///run/containerd/s/a500bfe95940e1318425e09565d5fabd125c60c6626c25da9c65c50928997300" protocol=ttrpc version=3 Sep 9 05:35:55.954488 kubelet[2905]: W0909 05:35:55.954435 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.25.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-117&limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:55.954622 kubelet[2905]: E0909 05:35:55.954497 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-117&limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.957795 systemd[1]: Started cri-containerd-02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700.scope - libcontainer container 02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700. Sep 9 05:35:55.967003 systemd[1]: Started cri-containerd-90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9.scope - libcontainer container 90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9. Sep 9 05:35:55.978161 systemd[1]: Started cri-containerd-ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2.scope - libcontainer container ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2. Sep 9 05:35:56.091413 containerd[1901]: time="2025-09-09T05:35:56.091329925Z" level=info msg="StartContainer for \"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\" returns successfully" Sep 9 05:35:56.103243 kubelet[2905]: E0909 05:35:56.102792 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:56.124161 containerd[1901]: time="2025-09-09T05:35:56.124114497Z" level=info msg="StartContainer for \"90ddfa67cf23ce9148e1b3d1198d88453d3c5962f5d371f5f7f2ce8b00e9aed9\" returns successfully" Sep 9 05:35:56.126339 containerd[1901]: time="2025-09-09T05:35:56.126282410Z" level=info msg="StartContainer for \"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\" returns successfully" Sep 9 05:35:56.221038 kubelet[2905]: W0909 05:35:56.220912 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:56.221038 kubelet[2905]: E0909 05:35:56.221007 2905 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:56.435109 kubelet[2905]: E0909 05:35:56.434993 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-117?timeout=10s\": dial tcp 172.31.25.117:6443: connect: connection refused" interval="1.6s" Sep 9 05:35:56.518241 kubelet[2905]: W0909 05:35:56.517783 2905 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.117:6443: connect: connection refused Sep 9 05:35:56.518241 kubelet[2905]: E0909 05:35:56.517996 2905 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:56.638141 kubelet[2905]: I0909 05:35:56.638115 2905 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-117" Sep 9 05:35:56.639720 kubelet[2905]: E0909 05:35:56.639683 2905 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.117:6443/api/v1/nodes\": dial tcp 172.31.25.117:6443: connect: connection refused" node="ip-172-31-25-117" Sep 9 05:35:57.109809 kubelet[2905]: E0909 05:35:57.109784 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:57.115572 kubelet[2905]: E0909 05:35:57.115088 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:57.115572 kubelet[2905]: E0909 05:35:57.115484 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:58.122543 kubelet[2905]: E0909 05:35:58.122509 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:58.131844 kubelet[2905]: E0909 05:35:58.131806 2905 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:58.242838 kubelet[2905]: I0909 05:35:58.242806 2905 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-117" Sep 9 05:35:59.033929 kubelet[2905]: E0909 05:35:59.033882 2905 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-117\" not found" node="ip-172-31-25-117" Sep 9 05:35:59.072748 kubelet[2905]: E0909 05:35:59.072584 2905 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-117.1863867deca6d9a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-117,UID:ip-172-31-25-117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-117,},FirstTimestamp:2025-09-09 05:35:55.004017065 +0000 UTC m=+0.386749353,LastTimestamp:2025-09-09 05:35:55.004017065 +0000 UTC m=+0.386749353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-117,}" Sep 9 05:35:59.114517 kubelet[2905]: I0909 05:35:59.114240 2905 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-117" Sep 9 05:35:59.114517 kubelet[2905]: E0909 05:35:59.114287 2905 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-25-117\": node \"ip-172-31-25-117\" not found" Sep 9 05:35:59.119488 kubelet[2905]: I0909 05:35:59.119458 2905 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:59.122686 kubelet[2905]: I0909 05:35:59.122647 2905 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:35:59.132586 kubelet[2905]: I0909 05:35:59.132094 2905 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:59.141863 kubelet[2905]: E0909 05:35:59.141829 2905 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:35:59.142111 kubelet[2905]: E0909 05:35:59.142091 2905 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:59.142154 kubelet[2905]: I0909 05:35:59.142113 2905 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:59.142257 kubelet[2905]: E0909 05:35:59.142242 2905 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:35:59.146743 kubelet[2905]: E0909 05:35:59.146050 2905 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:35:59.146743 kubelet[2905]: I0909 05:35:59.146077 2905 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:35:59.149075 kubelet[2905]: E0909 05:35:59.149042 2905 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:35:59.994376 kubelet[2905]: I0909 05:35:59.994276 2905 apiserver.go:52] "Watching apiserver" Sep 9 05:36:00.036578 kubelet[2905]: I0909 05:36:00.036515 2905 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:36:01.735785 systemd[1]: Reload requested from client PID 3177 ('systemctl') (unit session-7.scope)... Sep 9 05:36:01.735806 systemd[1]: Reloading... Sep 9 05:36:02.413390 zram_generator::config[3228]: No configuration found. Sep 9 05:36:02.949272 systemd[1]: Reloading finished in 1205 ms. Sep 9 05:36:02.982335 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 9 05:36:03.000445 kubelet[2905]: I0909 05:36:03.000362 2905 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:36:03.004607 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:36:03.021947 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:36:03.022396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:36:03.022513 systemd[1]: kubelet.service: Consumed 822ms CPU time, 130.5M memory peak. Sep 9 05:36:03.028215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:36:03.379097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:36:03.407624 (kubelet)[3285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:36:03.490360 kubelet[3285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:36:03.490813 kubelet[3285]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:36:03.490900 kubelet[3285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:36:03.491201 kubelet[3285]: I0909 05:36:03.491108 3285 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:36:03.500109 kubelet[3285]: I0909 05:36:03.500073 3285 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:36:03.501626 kubelet[3285]: I0909 05:36:03.500278 3285 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:36:03.501626 kubelet[3285]: I0909 05:36:03.500709 3285 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:36:03.502342 kubelet[3285]: I0909 05:36:03.502314 3285 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 05:36:03.505097 kubelet[3285]: I0909 05:36:03.504441 3285 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:36:03.525281 kubelet[3285]: I0909 05:36:03.525255 3285 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:36:03.536254 kubelet[3285]: I0909 05:36:03.536218 3285 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:36:03.539707 kubelet[3285]: I0909 05:36:03.539536 3285 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:36:03.540128 kubelet[3285]: I0909 05:36:03.539838 3285 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:36:03.540829 kubelet[3285]: I0909 05:36:03.540813 3285 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:36:03.540994 kubelet[3285]: I0909 05:36:03.540933 3285 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:36:03.541671 kubelet[3285]: I0909 05:36:03.541643 3285 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:36:03.541976 kubelet[3285]: I0909 05:36:03.541930 3285 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:36:03.541976 kubelet[3285]: I0909 05:36:03.541954 3285 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:36:03.542164 kubelet[3285]: I0909 05:36:03.542104 3285 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:36:03.542164 kubelet[3285]: I0909 05:36:03.542124 3285 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:36:03.549798 kubelet[3285]: I0909 05:36:03.549179 3285 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:36:03.551167 kubelet[3285]: I0909 05:36:03.551146 3285 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:36:03.556162 kubelet[3285]: I0909 05:36:03.555358 3285 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:36:03.556162 kubelet[3285]: I0909 05:36:03.555408 3285 server.go:1287] "Started kubelet" Sep 9 05:36:03.564844 sudo[3300]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 05:36:03.565922 kubelet[3285]: I0909 05:36:03.565798 3285 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Sep 9 05:36:03.566021 sudo[3300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 05:36:03.569691 kubelet[3285]: I0909 05:36:03.569503 3285 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:36:03.578531 kubelet[3285]: I0909 05:36:03.578487 3285 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:36:03.586734 kubelet[3285]: I0909 05:36:03.586658 3285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:36:03.587149 kubelet[3285]: I0909 05:36:03.587110 3285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:36:03.597895 kubelet[3285]: I0909 05:36:03.596778 3285 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:36:03.597895 kubelet[3285]: E0909 05:36:03.596928 3285 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-117\" not found" Sep 9 05:36:03.598719 kubelet[3285]: I0909 05:36:03.598589 3285 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:36:03.598994 kubelet[3285]: I0909 05:36:03.598745 3285 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:36:03.599680 kubelet[3285]: I0909 05:36:03.599498 3285 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:36:03.642455 kubelet[3285]: I0909 05:36:03.642135 3285 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:36:03.642455 kubelet[3285]: I0909 05:36:03.642161 3285 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:36:03.642455 kubelet[3285]: I0909 05:36:03.642281 3285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:36:03.656576 kubelet[3285]: E0909 05:36:03.655187 3285 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:36:03.656576 kubelet[3285]: I0909 05:36:03.655733 3285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:36:03.662070 kubelet[3285]: I0909 05:36:03.662032 3285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:36:03.662070 kubelet[3285]: I0909 05:36:03.662078 3285 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:36:03.662249 kubelet[3285]: I0909 05:36:03.662104 3285 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 05:36:03.662249 kubelet[3285]: I0909 05:36:03.662112 3285 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:36:03.662249 kubelet[3285]: E0909 05:36:03.662171 3285 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:36:03.763407 kubelet[3285]: E0909 05:36:03.763372 3285 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:36:03.793349 kubelet[3285]: I0909 05:36:03.793315 3285 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:36:03.793349 kubelet[3285]: I0909 05:36:03.793341 3285 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:36:03.793560 kubelet[3285]: I0909 05:36:03.793363 3285 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:36:03.793616 kubelet[3285]: I0909 05:36:03.793600 3285 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:36:03.793659 kubelet[3285]: I0909 05:36:03.793616 3285 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:36:03.793659 kubelet[3285]: I0909 05:36:03.793643 3285 policy_none.go:49] "None policy: Start" Sep 9 05:36:03.793659 kubelet[3285]: I0909 05:36:03.793656 3285 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:36:03.793769 kubelet[3285]: I0909 05:36:03.793669 3285 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:36:03.794309 kubelet[3285]: I0909 05:36:03.794278 3285 state_mem.go:75] "Updated machine memory state" Sep 9 05:36:03.801572 kubelet[3285]: I0909 05:36:03.801527 3285 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:36:03.807455 kubelet[3285]: I0909 05:36:03.805772 3285 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:36:03.807455 kubelet[3285]: I0909 05:36:03.805791 3285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:36:03.807455 kubelet[3285]: I0909 05:36:03.806207 3285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:36:03.810055 kubelet[3285]: E0909 05:36:03.810029 3285 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 05:36:03.928996 kubelet[3285]: I0909 05:36:03.928706 3285 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-117" Sep 9 05:36:03.945921 kubelet[3285]: I0909 05:36:03.945877 3285 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-25-117" Sep 9 05:36:03.946249 kubelet[3285]: I0909 05:36:03.946188 3285 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-117" Sep 9 05:36:03.968500 kubelet[3285]: I0909 05:36:03.968443 3285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:36:03.968869 kubelet[3285]: I0909 05:36:03.968684 3285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:36:03.969691 kubelet[3285]: I0909 05:36:03.969656 3285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:36:03.987332 kubelet[3285]: E0909 05:36:03.987291 3285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-117\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:36:04.004541 kubelet[3285]: I0909 05:36:04.004330 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:36:04.004541 kubelet[3285]: I0909 05:36:04.004385 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:36:04.004541 kubelet[3285]: I0909 05:36:04.004417 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02c20e5448fef9e5424f5d280fb05f27-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-117\" (UID: \"02c20e5448fef9e5424f5d280fb05f27\") " pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:36:04.004853 kubelet[3285]: I0909 05:36:04.004443 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6db4de21a0c522dd96b249cc10f33e3d-ca-certs\") pod \"kube-apiserver-ip-172-31-25-117\" (UID: \"6db4de21a0c522dd96b249cc10f33e3d\") " pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:36:04.004853 kubelet[3285]: I0909 05:36:04.004608 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6db4de21a0c522dd96b249cc10f33e3d-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-117\" (UID: \"6db4de21a0c522dd96b249cc10f33e3d\") " pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:36:04.004853 kubelet[3285]: I0909 05:36:04.004630 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:36:04.004853 kubelet[3285]: I0909 05:36:04.004681 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:36:04.004853 kubelet[3285]: I0909 05:36:04.004781 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f8a10ebeaffa42847e9ce44be677ed6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-117\" (UID: \"2f8a10ebeaffa42847e9ce44be677ed6\") " pod="kube-system/kube-controller-manager-ip-172-31-25-117" Sep 9 05:36:04.005059 kubelet[3285]: I0909 05:36:04.004838 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6db4de21a0c522dd96b249cc10f33e3d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-117\" (UID: \"6db4de21a0c522dd96b249cc10f33e3d\") " pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:36:04.184976 sudo[3300]: pam_unix(sudo:session): session closed for user root Sep 9 05:36:04.545648 kubelet[3285]: I0909 05:36:04.544808 3285 apiserver.go:52] "Watching apiserver" Sep 9 05:36:04.598788 kubelet[3285]: I0909 05:36:04.598740 3285 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:36:04.745247 kubelet[3285]: I0909 05:36:04.744907 3285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:36:04.746483 kubelet[3285]: I0909 05:36:04.746338 3285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:36:04.757963 kubelet[3285]: E0909 05:36:04.757811 3285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-117\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-117" Sep 9 05:36:04.759345 kubelet[3285]: E0909 05:36:04.759302 3285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-117\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-117" Sep 9 05:36:04.803075 kubelet[3285]: I0909 05:36:04.802813 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-117" podStartSLOduration=1.802793237 podStartE2EDuration="1.802793237s" podCreationTimestamp="2025-09-09 05:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:04.788792435 +0000 UTC m=+1.371640258" watchObservedRunningTime="2025-09-09 05:36:04.802793237 +0000 UTC m=+1.385641060" Sep 9 05:36:04.804208 kubelet[3285]: I0909 05:36:04.803843 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-117" podStartSLOduration=1.803679447 podStartE2EDuration="1.803679447s" podCreationTimestamp="2025-09-09 05:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-09 05:36:04.802604295 +0000 UTC m=+1.385452128" watchObservedRunningTime="2025-09-09 05:36:04.803679447 +0000 UTC m=+1.386527278" Sep 9 05:36:04.826238 kubelet[3285]: I0909 05:36:04.825919 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-117" podStartSLOduration=1.825882373 podStartE2EDuration="1.825882373s" podCreationTimestamp="2025-09-09 05:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:04.824336363 +0000 UTC m=+1.407184200" watchObservedRunningTime="2025-09-09 05:36:04.825882373 +0000 UTC m=+1.408730203" Sep 9 05:36:06.257797 sudo[2344]: pam_unix(sudo:session): session closed for user root Sep 9 05:36:06.279948 sshd[2343]: Connection closed by 147.75.109.163 port 45430 Sep 9 05:36:06.280715 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:06.286222 systemd[1]: sshd@6-172.31.25.117:22-147.75.109.163:45430.service: Deactivated successfully. Sep 9 05:36:06.289082 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:36:06.289469 systemd[1]: session-7.scope: Consumed 4.664s CPU time, 208.8M memory peak. Sep 9 05:36:06.291320 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:36:06.293667 systemd-logind[1872]: Removed session 7. Sep 9 05:36:06.583838 kubelet[3285]: I0909 05:36:06.583691 3285 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:36:06.584213 containerd[1901]: time="2025-09-09T05:36:06.584073634Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:36:06.584795 kubelet[3285]: I0909 05:36:06.584586 3285 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:36:07.460968 systemd[1]: Created slice kubepods-besteffort-podb7d0e43b_ead7_406e_b390_db0dfb4a191d.slice - libcontainer container kubepods-besteffort-podb7d0e43b_ead7_406e_b390_db0dfb4a191d.slice. Sep 9 05:36:07.513597 systemd[1]: Created slice kubepods-burstable-pod973febec_1d52_4e08_84d7_b41dca1ca333.slice - libcontainer container kubepods-burstable-pod973febec_1d52_4e08_84d7_b41dca1ca333.slice. 
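The two entries just above show the node receiving PodCIDR 192.168.0.0/24 and pushing it to the CRI runtime. A quick stdlib sketch of what that /24 means for this node; the CIDR, the 192.168.0.214 address (which later shows up on cilium_host in the ntpd lines) and the node address 172.31.25.117 are all taken from the log, the arithmetic is standard:

import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # PodCIDR reported by kubelet_network.go above

print("addresses in range:", pod_cidr.num_addresses)                    # 256
print("usable for pods (conventionally):", pod_cidr.num_addresses - 2)  # 254, excluding network/broadcast
print(ipaddress.ip_address("192.168.0.214") in pod_cidr)                # True - the cilium_host address seen later
print(ipaddress.ip_address("172.31.25.117") in pod_cidr)                # False - the node's own EC2 address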
Sep 9 05:36:07.529533 kubelet[3285]: I0909 05:36:07.529259 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cni-path\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.529817 kubelet[3285]: I0909 05:36:07.529795 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/973febec-1d52-4e08-84d7-b41dca1ca333-clustermesh-secrets\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.529951 kubelet[3285]: I0909 05:36:07.529931 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvxm\" (UniqueName: \"kubernetes.io/projected/b7d0e43b-ead7-406e-b390-db0dfb4a191d-kube-api-access-ggvxm\") pod \"kube-proxy-zhbv4\" (UID: \"b7d0e43b-ead7-406e-b390-db0dfb4a191d\") " pod="kube-system/kube-proxy-zhbv4" Sep 9 05:36:07.530070 kubelet[3285]: I0909 05:36:07.530055 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-bpf-maps\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.530272 kubelet[3285]: I0909 05:36:07.530252 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-hostproc\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.530403 kubelet[3285]: I0909 05:36:07.530367 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-lib-modules\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.531628 kubelet[3285]: I0909 05:36:07.531606 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7d0e43b-ead7-406e-b390-db0dfb4a191d-lib-modules\") pod \"kube-proxy-zhbv4\" (UID: \"b7d0e43b-ead7-406e-b390-db0dfb4a191d\") " pod="kube-system/kube-proxy-zhbv4" Sep 9 05:36:07.531766 kubelet[3285]: I0909 05:36:07.531752 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-hubble-tls\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.531869 kubelet[3285]: I0909 05:36:07.531854 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7d0e43b-ead7-406e-b390-db0dfb4a191d-kube-proxy\") pod \"kube-proxy-zhbv4\" (UID: \"b7d0e43b-ead7-406e-b390-db0dfb4a191d\") " pod="kube-system/kube-proxy-zhbv4" Sep 9 05:36:07.531948 kubelet[3285]: I0909 05:36:07.531937 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-etc-cni-netd\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532277 kubelet[3285]: I0909 05:36:07.532014 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-config-path\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532277 kubelet[3285]: I0909 05:36:07.532041 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-net\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532277 kubelet[3285]: I0909 05:36:07.532064 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-cgroup\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532277 kubelet[3285]: I0909 05:36:07.532086 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-xtables-lock\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532277 kubelet[3285]: I0909 05:36:07.532109 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-kernel\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532277 kubelet[3285]: I0909 05:36:07.532132 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-run\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.532545 kubelet[3285]: I0909 05:36:07.532156 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7d0e43b-ead7-406e-b390-db0dfb4a191d-xtables-lock\") pod \"kube-proxy-zhbv4\" (UID: \"b7d0e43b-ead7-406e-b390-db0dfb4a191d\") " pod="kube-system/kube-proxy-zhbv4" Sep 9 05:36:07.532545 kubelet[3285]: I0909 05:36:07.532182 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klt27\" (UniqueName: \"kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-kube-api-access-klt27\") pod \"cilium-nhmtm\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " pod="kube-system/cilium-nhmtm" Sep 9 05:36:07.731234 systemd[1]: Created slice kubepods-besteffort-pod23b0a564_0383_4387_8353_7b772362e503.slice - libcontainer container kubepods-besteffort-pod23b0a564_0383_4387_8353_7b772362e503.slice. 
Sep 9 05:36:07.733705 kubelet[3285]: I0909 05:36:07.733615 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23b0a564-0383-4387-8353-7b772362e503-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4vtq9\" (UID: \"23b0a564-0383-4387-8353-7b772362e503\") " pod="kube-system/cilium-operator-6c4d7847fc-4vtq9" Sep 9 05:36:07.733705 kubelet[3285]: I0909 05:36:07.733701 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l7ww\" (UniqueName: \"kubernetes.io/projected/23b0a564-0383-4387-8353-7b772362e503-kube-api-access-7l7ww\") pod \"cilium-operator-6c4d7847fc-4vtq9\" (UID: \"23b0a564-0383-4387-8353-7b772362e503\") " pod="kube-system/cilium-operator-6c4d7847fc-4vtq9" Sep 9 05:36:07.771802 containerd[1901]: time="2025-09-09T05:36:07.771747636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhbv4,Uid:b7d0e43b-ead7-406e-b390-db0dfb4a191d,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:07.792932 containerd[1901]: time="2025-09-09T05:36:07.792891992Z" level=info msg="connecting to shim c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15" address="unix:///run/containerd/s/6718024b12162314dc6bc636b6a940d2d831bbae60921c7b2ad46c6ef1cbe99e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:07.820785 systemd[1]: Started cri-containerd-c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15.scope - libcontainer container c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15. Sep 9 05:36:07.824294 containerd[1901]: time="2025-09-09T05:36:07.824242582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhmtm,Uid:973febec-1d52-4e08-84d7-b41dca1ca333,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:07.858504 containerd[1901]: time="2025-09-09T05:36:07.858277893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhbv4,Uid:b7d0e43b-ead7-406e-b390-db0dfb4a191d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15\"" Sep 9 05:36:07.858898 containerd[1901]: time="2025-09-09T05:36:07.858847919Z" level=info msg="connecting to shim 47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2" address="unix:///run/containerd/s/020daa23679eb9c1fe09bde1fd7ed65b7a7113b85e6eccc429f7320ff334a11f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:07.863770 containerd[1901]: time="2025-09-09T05:36:07.863725928Z" level=info msg="CreateContainer within sandbox \"c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:36:07.884621 containerd[1901]: time="2025-09-09T05:36:07.884589345Z" level=info msg="Container 991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:07.885753 systemd[1]: Started cri-containerd-47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2.scope - libcontainer container 47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2. 
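The reconciler_common.go:251 lines further up all follow one pattern, one line per attached volume per pod. A throwaway sketch that groups such lines by pod; the three sample strings are abbreviated copies of entries from this log, and nothing beyond the visible message shape is assumed:

import re
from collections import defaultdict

lines = [
    'VerifyControllerAttachedVolume started for volume "cni-path" ... pod="kube-system/cilium-nhmtm"',
    'VerifyControllerAttachedVolume started for volume "bpf-maps" ... pod="kube-system/cilium-nhmtm"',
    'VerifyControllerAttachedVolume started for volume "kube-proxy" ... pod="kube-system/kube-proxy-zhbv4"',
]

by_pod = defaultdict(list)
for line in lines:
    m = re.search(r'volume "([^"]+)".*pod="([^"]+)"', line)
    if m:
        by_pod[m.group(2)].append(m.group(1))

for pod, vols in sorted(by_pod.items()):
    print(f"{pod}: {len(vols)} volumes ({', '.join(vols)})")

Run over the full journal this gives a quick per-pod view of host paths, secrets and projected tokens; note that the real journal lines escape the inner quotes, so a production version would need to unescape them first.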
Sep 9 05:36:07.901002 containerd[1901]: time="2025-09-09T05:36:07.900966457Z" level=info msg="CreateContainer within sandbox \"c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c\"" Sep 9 05:36:07.903793 containerd[1901]: time="2025-09-09T05:36:07.903706680Z" level=info msg="StartContainer for \"991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c\"" Sep 9 05:36:07.905726 containerd[1901]: time="2025-09-09T05:36:07.905687309Z" level=info msg="connecting to shim 991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c" address="unix:///run/containerd/s/6718024b12162314dc6bc636b6a940d2d831bbae60921c7b2ad46c6ef1cbe99e" protocol=ttrpc version=3 Sep 9 05:36:07.921318 containerd[1901]: time="2025-09-09T05:36:07.921258693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhmtm,Uid:973febec-1d52-4e08-84d7-b41dca1ca333,Namespace:kube-system,Attempt:0,} returns sandbox id \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\"" Sep 9 05:36:07.925252 containerd[1901]: time="2025-09-09T05:36:07.925213600Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 05:36:07.929750 systemd[1]: Started cri-containerd-991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c.scope - libcontainer container 991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c. Sep 9 05:36:07.983897 containerd[1901]: time="2025-09-09T05:36:07.983355823Z" level=info msg="StartContainer for \"991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c\" returns successfully" Sep 9 05:36:08.042388 containerd[1901]: time="2025-09-09T05:36:08.042284557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4vtq9,Uid:23b0a564-0383-4387-8353-7b772362e503,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:08.079460 containerd[1901]: time="2025-09-09T05:36:08.078714710Z" level=info msg="connecting to shim 06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47" address="unix:///run/containerd/s/5b6e8dfa744f7529b65bdd83c3089b2504ba5efa10d6bac5025dec1a330ec7aa" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:08.107782 systemd[1]: Started cri-containerd-06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47.scope - libcontainer container 06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47. 
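Worth noticing in the containerd entries above: the "connecting to shim" line for the kube-proxy pod sandbox (c631ba27...) and the later one for its kube-proxy container (99162660...) carry the same address= socket. A tiny sketch, with the two fragments copied from the log, that extracts and compares the field; the conclusion, one shim process serving the whole pod, is how containerd's shim-per-pod model is usually described, while the log itself only shows the shared socket:

sandbox_line = ('msg="connecting to shim c631ba279f5d4637a66779c9ececefe3e1ba06f4334f0212d037935a8ecbbb15" '
                'address="unix:///run/containerd/s/6718024b12162314dc6bc636b6a940d2d831bbae60921c7b2ad46c6ef1cbe99e"')
container_line = ('msg="connecting to shim 991626608be349eef2eed0253b91063cd32bc4f955a5ea124a4f99273fb6a17c" '
                  'address="unix:///run/containerd/s/6718024b12162314dc6bc636b6a940d2d831bbae60921c7b2ad46c6ef1cbe99e"')

def shim_socket(line: str) -> str:
    # Pull the value of the address="..." field out of a containerd log fragment.
    return line.split('address="', 1)[1].split('"', 1)[0]

print(shim_socket(sandbox_line))
print(shim_socket(sandbox_line) == shim_socket(container_line))  # True

The cilium-nhmtm pod, by contrast, gets its own shim socket (020daa23...), which every one of its containers reuses in the entries that follow.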
Sep 9 05:36:08.165646 containerd[1901]: time="2025-09-09T05:36:08.165589380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4vtq9,Uid:23b0a564-0383-4387-8353-7b772362e503,Namespace:kube-system,Attempt:0,} returns sandbox id \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\"" Sep 9 05:36:08.768972 kubelet[3285]: I0909 05:36:08.768814 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhbv4" podStartSLOduration=1.768774844 podStartE2EDuration="1.768774844s" podCreationTimestamp="2025-09-09 05:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:08.768416659 +0000 UTC m=+5.351264491" watchObservedRunningTime="2025-09-09 05:36:08.768774844 +0000 UTC m=+5.351622678" Sep 9 05:36:17.495673 update_engine[1878]: I20250909 05:36:17.495558 1878 update_attempter.cc:509] Updating boot flags... Sep 9 05:36:17.879438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698997751.mount: Deactivated successfully. Sep 9 05:36:20.630718 containerd[1901]: time="2025-09-09T05:36:20.630651889Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:20.634417 containerd[1901]: time="2025-09-09T05:36:20.634027398Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 05:36:20.634925 containerd[1901]: time="2025-09-09T05:36:20.634863183Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:20.636808 containerd[1901]: time="2025-09-09T05:36:20.636759761Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.710165493s" Sep 9 05:36:20.636808 containerd[1901]: time="2025-09-09T05:36:20.636806312Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 05:36:20.639371 containerd[1901]: time="2025-09-09T05:36:20.639329250Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:36:20.641518 containerd[1901]: time="2025-09-09T05:36:20.641443880Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:36:20.674852 containerd[1901]: time="2025-09-09T05:36:20.674810899Z" level=info msg="Container 211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:20.678234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1729994136.mount: Deactivated successfully. 
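The cilium image pull that finishes above took 12.710165493s for 166730503 bytes read (presumably the compressed transfer; the logged image "size" of 166719855 is slightly smaller). A back-of-the-envelope throughput check with the two numbers copied from the log:

bytes_read = 166_730_503        # "stop pulling image ... bytes read=166730503"
pull_seconds = 12.710165493     # "Pulled image ... in 12.710165493s"

print(f"~{bytes_read / pull_seconds / 2**20:.1f} MiB/s average")   # roughly 12.5 MiB/s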
Sep 9 05:36:20.692822 containerd[1901]: time="2025-09-09T05:36:20.692757383Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\"" Sep 9 05:36:20.697278 containerd[1901]: time="2025-09-09T05:36:20.697216868Z" level=info msg="StartContainer for \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\"" Sep 9 05:36:20.698635 containerd[1901]: time="2025-09-09T05:36:20.698596561Z" level=info msg="connecting to shim 211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3" address="unix:///run/containerd/s/020daa23679eb9c1fe09bde1fd7ed65b7a7113b85e6eccc429f7320ff334a11f" protocol=ttrpc version=3 Sep 9 05:36:20.763918 systemd[1]: Started cri-containerd-211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3.scope - libcontainer container 211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3. Sep 9 05:36:20.815705 containerd[1901]: time="2025-09-09T05:36:20.815230194Z" level=info msg="StartContainer for \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" returns successfully" Sep 9 05:36:20.824349 systemd[1]: cri-containerd-211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3.scope: Deactivated successfully. Sep 9 05:36:20.858282 containerd[1901]: time="2025-09-09T05:36:20.858222091Z" level=info msg="received exit event container_id:\"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" id:\"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" pid:3883 exited_at:{seconds:1757396180 nanos:831856351}" Sep 9 05:36:20.866384 containerd[1901]: time="2025-09-09T05:36:20.866315323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" id:\"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" pid:3883 exited_at:{seconds:1757396180 nanos:831856351}" Sep 9 05:36:20.899463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3-rootfs.mount: Deactivated successfully. Sep 9 05:36:21.824140 containerd[1901]: time="2025-09-09T05:36:21.823731537Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:36:21.840606 containerd[1901]: time="2025-09-09T05:36:21.840566661Z" level=info msg="Container c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:21.844768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671595650.mount: Deactivated successfully. 
Sep 9 05:36:21.863248 containerd[1901]: time="2025-09-09T05:36:21.863196816Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\"" Sep 9 05:36:21.863970 containerd[1901]: time="2025-09-09T05:36:21.863939606Z" level=info msg="StartContainer for \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\"" Sep 9 05:36:21.865186 containerd[1901]: time="2025-09-09T05:36:21.865102786Z" level=info msg="connecting to shim c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671" address="unix:///run/containerd/s/020daa23679eb9c1fe09bde1fd7ed65b7a7113b85e6eccc429f7320ff334a11f" protocol=ttrpc version=3 Sep 9 05:36:21.893788 systemd[1]: Started cri-containerd-c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671.scope - libcontainer container c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671. Sep 9 05:36:21.927281 containerd[1901]: time="2025-09-09T05:36:21.927238744Z" level=info msg="StartContainer for \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" returns successfully" Sep 9 05:36:21.945099 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:36:21.945573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:36:21.946472 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:36:21.950198 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:36:21.955190 containerd[1901]: time="2025-09-09T05:36:21.954777987Z" level=info msg="received exit event container_id:\"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" id:\"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" pid:3928 exited_at:{seconds:1757396181 nanos:953092242}" Sep 9 05:36:21.955190 containerd[1901]: time="2025-09-09T05:36:21.955136141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" id:\"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" pid:3928 exited_at:{seconds:1757396181 nanos:953092242}" Sep 9 05:36:21.955679 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:36:21.956501 systemd[1]: cri-containerd-c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671.scope: Deactivated successfully. Sep 9 05:36:21.994391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:36:22.827944 containerd[1901]: time="2025-09-09T05:36:22.827891384Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:36:22.840311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671-rootfs.mount: Deactivated successfully. Sep 9 05:36:22.857029 containerd[1901]: time="2025-09-09T05:36:22.856987948Z" level=info msg="Container 80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:22.863981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509751320.mount: Deactivated successfully. 
Sep 9 05:36:22.880691 containerd[1901]: time="2025-09-09T05:36:22.880639311Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\"" Sep 9 05:36:22.881822 containerd[1901]: time="2025-09-09T05:36:22.881727143Z" level=info msg="StartContainer for \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\"" Sep 9 05:36:22.883871 containerd[1901]: time="2025-09-09T05:36:22.883821525Z" level=info msg="connecting to shim 80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f" address="unix:///run/containerd/s/020daa23679eb9c1fe09bde1fd7ed65b7a7113b85e6eccc429f7320ff334a11f" protocol=ttrpc version=3 Sep 9 05:36:22.916774 systemd[1]: Started cri-containerd-80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f.scope - libcontainer container 80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f. Sep 9 05:36:22.962232 systemd[1]: cri-containerd-80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f.scope: Deactivated successfully. Sep 9 05:36:22.967449 containerd[1901]: time="2025-09-09T05:36:22.966830026Z" level=info msg="StartContainer for \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" returns successfully" Sep 9 05:36:22.967449 containerd[1901]: time="2025-09-09T05:36:22.966960967Z" level=info msg="received exit event container_id:\"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" id:\"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" pid:3979 exited_at:{seconds:1757396182 nanos:966450896}" Sep 9 05:36:22.967789 containerd[1901]: time="2025-09-09T05:36:22.966961066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" id:\"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" pid:3979 exited_at:{seconds:1757396182 nanos:966450896}" Sep 9 05:36:23.832983 containerd[1901]: time="2025-09-09T05:36:23.832931570Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:36:23.841260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f-rootfs.mount: Deactivated successfully. 
Sep 9 05:36:23.857917 containerd[1901]: time="2025-09-09T05:36:23.855812192Z" level=info msg="Container c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:23.872778 containerd[1901]: time="2025-09-09T05:36:23.872735524Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\"" Sep 9 05:36:23.873683 containerd[1901]: time="2025-09-09T05:36:23.873656192Z" level=info msg="StartContainer for \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\"" Sep 9 05:36:23.874803 containerd[1901]: time="2025-09-09T05:36:23.874773093Z" level=info msg="connecting to shim c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117" address="unix:///run/containerd/s/020daa23679eb9c1fe09bde1fd7ed65b7a7113b85e6eccc429f7320ff334a11f" protocol=ttrpc version=3 Sep 9 05:36:23.903789 systemd[1]: Started cri-containerd-c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117.scope - libcontainer container c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117. Sep 9 05:36:23.944328 systemd[1]: cri-containerd-c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117.scope: Deactivated successfully. Sep 9 05:36:23.946819 containerd[1901]: time="2025-09-09T05:36:23.946792014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" id:\"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" pid:4024 exited_at:{seconds:1757396183 nanos:944924248}" Sep 9 05:36:23.951578 containerd[1901]: time="2025-09-09T05:36:23.951004153Z" level=info msg="received exit event container_id:\"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" id:\"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" pid:4024 exited_at:{seconds:1757396183 nanos:944924248}" Sep 9 05:36:23.952942 containerd[1901]: time="2025-09-09T05:36:23.952918990Z" level=info msg="StartContainer for \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" returns successfully" Sep 9 05:36:23.976783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117-rootfs.mount: Deactivated successfully. 
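The four TaskExit events above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, the usual Cilium init-container sequence before the agent itself starts below) report exited_at as raw epoch seconds plus nanos. A small conversion sketch with the values copied from those events, showing they land exactly on the surrounding journal timestamps, roughly one init step per second:

from datetime import datetime, timezone

# exited_at values copied from the TaskExit events above, in the order the containers ran.
exits = {
    "mount-cgroup":            (1757396180, 831856351),
    "apply-sysctl-overwrites": (1757396181, 953092242),
    "mount-bpf-fs":            (1757396182, 966450896),
    "clean-cilium-state":      (1757396183, 944924248),
}

for name, (secs, nanos) in exits.items():
    ts = datetime.fromtimestamp(secs + nanos / 1e9, tz=timezone.utc)
    print(f"{name:<24} exited {ts.isoformat(timespec='milliseconds')}")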
Sep 9 05:36:24.604945 containerd[1901]: time="2025-09-09T05:36:24.604621345Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:24.606602 containerd[1901]: time="2025-09-09T05:36:24.606564160Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 05:36:24.608758 containerd[1901]: time="2025-09-09T05:36:24.608708082Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:24.610237 containerd[1901]: time="2025-09-09T05:36:24.610045119Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.970668735s" Sep 9 05:36:24.610237 containerd[1901]: time="2025-09-09T05:36:24.610077300Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 05:36:24.612968 containerd[1901]: time="2025-09-09T05:36:24.612939859Z" level=info msg="CreateContainer within sandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 05:36:24.625253 containerd[1901]: time="2025-09-09T05:36:24.624701515Z" level=info msg="Container 7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:24.643307 containerd[1901]: time="2025-09-09T05:36:24.643266746Z" level=info msg="CreateContainer within sandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\"" Sep 9 05:36:24.644192 containerd[1901]: time="2025-09-09T05:36:24.644164116Z" level=info msg="StartContainer for \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\"" Sep 9 05:36:24.645189 containerd[1901]: time="2025-09-09T05:36:24.645156488Z" level=info msg="connecting to shim 7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb" address="unix:///run/containerd/s/5b6e8dfa744f7529b65bdd83c3089b2504ba5efa10d6bac5025dec1a330ec7aa" protocol=ttrpc version=3 Sep 9 05:36:24.664794 systemd[1]: Started cri-containerd-7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb.scope - libcontainer container 7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb. 
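The operator image above is referenced by tag and digest at once (quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f...). A small sketch that splits a reference of exactly this shape into its parts; it is not a general OCI-reference parser (it would mishandle, say, a registry host with a port and no tag):

ref = ("quay.io/cilium/operator-generic:v1.12.5"
       "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")

name, _, digest = ref.partition("@")      # tag-qualified name vs. digest
repo, _, tag = name.rpartition(":")       # strip the tag
registry, _, path = repo.partition("/")   # first segment is the registry host

print(registry, path, tag, digest, sep="\n")
# quay.io
# cilium/operator-generic
# v1.12.5
# sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e

This is also consistent with the empty repo tag "" in the Pulled lines above: an image pulled by digest is recorded under its repo digest rather than a tag.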
Sep 9 05:36:24.697634 containerd[1901]: time="2025-09-09T05:36:24.697599398Z" level=info msg="StartContainer for \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" returns successfully" Sep 9 05:36:24.858768 containerd[1901]: time="2025-09-09T05:36:24.858634978Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:36:24.886577 containerd[1901]: time="2025-09-09T05:36:24.882792931Z" level=info msg="Container dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:24.902464 containerd[1901]: time="2025-09-09T05:36:24.902414011Z" level=info msg="CreateContainer within sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\"" Sep 9 05:36:24.905121 containerd[1901]: time="2025-09-09T05:36:24.905012879Z" level=info msg="StartContainer for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\"" Sep 9 05:36:24.908816 containerd[1901]: time="2025-09-09T05:36:24.908772635Z" level=info msg="connecting to shim dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415" address="unix:///run/containerd/s/020daa23679eb9c1fe09bde1fd7ed65b7a7113b85e6eccc429f7320ff334a11f" protocol=ttrpc version=3 Sep 9 05:36:24.965826 systemd[1]: Started cri-containerd-dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415.scope - libcontainer container dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415. Sep 9 05:36:25.015127 kubelet[3285]: I0909 05:36:25.015049 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4vtq9" podStartSLOduration=1.5715256640000002 podStartE2EDuration="18.015011611s" podCreationTimestamp="2025-09-09 05:36:07 +0000 UTC" firstStartedPulling="2025-09-09 05:36:08.167538487 +0000 UTC m=+4.750386299" lastFinishedPulling="2025-09-09 05:36:24.611024421 +0000 UTC m=+21.193872246" observedRunningTime="2025-09-09 05:36:24.913421795 +0000 UTC m=+21.496269628" watchObservedRunningTime="2025-09-09 05:36:25.015011611 +0000 UTC m=+21.597859445" Sep 9 05:36:25.062905 containerd[1901]: time="2025-09-09T05:36:25.062848972Z" level=info msg="StartContainer for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" returns successfully" Sep 9 05:36:25.368680 containerd[1901]: time="2025-09-09T05:36:25.368637797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" id:\"71628cdef41f5665b039d7251196c09b67b544d19562ab843cd4b5e1fd70ce43\" pid:4126 exited_at:{seconds:1757396185 nanos:365270908}" Sep 9 05:36:25.462905 kubelet[3285]: I0909 05:36:25.462877 3285 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:36:25.575628 systemd[1]: Created slice kubepods-burstable-poddc902c5e_0c21_4efd_b355_8569e250fca1.slice - libcontainer container kubepods-burstable-poddc902c5e_0c21_4efd_b355_8569e250fca1.slice. Sep 9 05:36:25.588933 systemd[1]: Created slice kubepods-burstable-podbab089e1_4d8a_4546_8a4d_cd96d75bf273.slice - libcontainer container kubepods-burstable-podbab089e1_4d8a_4546_8a4d_cd96d75bf273.slice. 
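The pod_startup_latency_tracker entry above gives cilium-operator a podStartE2EDuration of 18.015011611s but a podStartSLOduration of only ~1.57s, together with the pull window. A quick check with the timestamps copied from that entry, consistent with the SLO figure being the end-to-end time minus the image-pull window (the startup SLI is generally described as excluding image pulls):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps copied from the entry above, truncated to microseconds for strptime.
first_pull = datetime.strptime("2025-09-09 05:36:08.167538", fmt)
last_pull  = datetime.strptime("2025-09-09 05:36:24.611024", fmt)

pull_window = (last_pull - first_pull).total_seconds()
e2e = 18.015011611                               # podStartE2EDuration
print(f"pull window: {pull_window:.3f}s")        # ~16.443s
print(f"e2e - pull:  {e2e - pull_window:.3f}s")  # ~1.572s, vs. podStartSLOduration=1.5715...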
Sep 9 05:36:25.674428 kubelet[3285]: I0909 05:36:25.674055 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbzk\" (UniqueName: \"kubernetes.io/projected/dc902c5e-0c21-4efd-b355-8569e250fca1-kube-api-access-9nbzk\") pod \"coredns-668d6bf9bc-5bxrj\" (UID: \"dc902c5e-0c21-4efd-b355-8569e250fca1\") " pod="kube-system/coredns-668d6bf9bc-5bxrj" Sep 9 05:36:25.674428 kubelet[3285]: I0909 05:36:25.674156 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc902c5e-0c21-4efd-b355-8569e250fca1-config-volume\") pod \"coredns-668d6bf9bc-5bxrj\" (UID: \"dc902c5e-0c21-4efd-b355-8569e250fca1\") " pod="kube-system/coredns-668d6bf9bc-5bxrj" Sep 9 05:36:25.674428 kubelet[3285]: I0909 05:36:25.674184 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bab089e1-4d8a-4546-8a4d-cd96d75bf273-config-volume\") pod \"coredns-668d6bf9bc-xn6b9\" (UID: \"bab089e1-4d8a-4546-8a4d-cd96d75bf273\") " pod="kube-system/coredns-668d6bf9bc-xn6b9" Sep 9 05:36:25.674428 kubelet[3285]: I0909 05:36:25.674242 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmr2\" (UniqueName: \"kubernetes.io/projected/bab089e1-4d8a-4546-8a4d-cd96d75bf273-kube-api-access-7bmr2\") pod \"coredns-668d6bf9bc-xn6b9\" (UID: \"bab089e1-4d8a-4546-8a4d-cd96d75bf273\") " pod="kube-system/coredns-668d6bf9bc-xn6b9" Sep 9 05:36:26.183230 containerd[1901]: time="2025-09-09T05:36:26.183189285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5bxrj,Uid:dc902c5e-0c21-4efd-b355-8569e250fca1,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:26.195272 containerd[1901]: time="2025-09-09T05:36:26.195229168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xn6b9,Uid:bab089e1-4d8a-4546-8a4d-cd96d75bf273,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:28.315535 systemd-networkd[1817]: cilium_host: Link UP Sep 9 05:36:28.316221 (udev-worker)[4191]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:36:28.316883 systemd-networkd[1817]: cilium_net: Link UP Sep 9 05:36:28.317060 systemd-networkd[1817]: cilium_net: Gained carrier Sep 9 05:36:28.317207 systemd-networkd[1817]: cilium_host: Gained carrier Sep 9 05:36:28.320093 (udev-worker)[4224]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:36:28.444667 (udev-worker)[4234]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:36:28.460640 systemd-networkd[1817]: cilium_vxlan: Link UP Sep 9 05:36:28.460652 systemd-networkd[1817]: cilium_vxlan: Gained carrier Sep 9 05:36:29.028582 kernel: NET: Registered PF_ALG protocol family Sep 9 05:36:29.035632 systemd-networkd[1817]: cilium_host: Gained IPv6LL Sep 9 05:36:29.354909 systemd-networkd[1817]: cilium_net: Gained IPv6LL Sep 9 05:36:29.762478 (udev-worker)[4233]: Network interface NamePolicy= disabled on kernel command line. 
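For readers not used to Cilium's datapath, the devices systemd-networkd brings up just above, plus the lxc_health and per-endpoint lxc* links that follow, are its standard plumbing. The device names below are the ones from this log; the role annotations are general Cilium background, not something the log states:

cilium_devices = {
    "cilium_host":  "host-side end of the veth pair holding the node's router IP (192.168.0.214 in the ntpd lines below)",
    "cilium_net":   "peer end of that veth pair",
    "cilium_vxlan": "VXLAN device carrying overlay traffic between nodes",
    "lxc_health":   "endpoint Cilium uses for its own health checks",
    "lxc*":         "per-pod host-side veth ends (lxcf9484e781e68 and lxc5e3ea703bc3c below)",
}

for dev, role in cilium_devices.items():
    print(f"{dev:<13} {role}")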
Sep 9 05:36:29.765273 systemd-networkd[1817]: lxc_health: Link UP Sep 9 05:36:29.771278 systemd-networkd[1817]: lxc_health: Gained carrier Sep 9 05:36:29.861546 kubelet[3285]: I0909 05:36:29.861477 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nhmtm" podStartSLOduration=10.146569859 podStartE2EDuration="22.861451583s" podCreationTimestamp="2025-09-09 05:36:07 +0000 UTC" firstStartedPulling="2025-09-09 05:36:07.923424876 +0000 UTC m=+4.506272702" lastFinishedPulling="2025-09-09 05:36:20.638306597 +0000 UTC m=+17.221154426" observedRunningTime="2025-09-09 05:36:26.093387282 +0000 UTC m=+22.676235116" watchObservedRunningTime="2025-09-09 05:36:29.861451583 +0000 UTC m=+26.444299417" Sep 9 05:36:30.186866 systemd-networkd[1817]: cilium_vxlan: Gained IPv6LL Sep 9 05:36:30.317280 kernel: eth0: renamed from tmpdd6a2 Sep 9 05:36:30.316716 systemd-networkd[1817]: lxcf9484e781e68: Link UP Sep 9 05:36:30.321649 systemd-networkd[1817]: lxcf9484e781e68: Gained carrier Sep 9 05:36:30.325694 (udev-worker)[4556]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:36:30.326221 systemd-networkd[1817]: lxc5e3ea703bc3c: Link UP Sep 9 05:36:30.336654 kernel: eth0: renamed from tmp850c1 Sep 9 05:36:30.339929 systemd-networkd[1817]: lxc5e3ea703bc3c: Gained carrier Sep 9 05:36:31.530817 systemd-networkd[1817]: lxc_health: Gained IPv6LL Sep 9 05:36:32.042708 systemd-networkd[1817]: lxcf9484e781e68: Gained IPv6LL Sep 9 05:36:32.106795 systemd-networkd[1817]: lxc5e3ea703bc3c: Gained IPv6LL Sep 9 05:36:34.641197 ntpd[1863]: Listen normally on 8 cilium_host 192.168.0.214:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 8 cilium_host 192.168.0.214:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 9 cilium_net [fe80::7c91:97ff:feb8:172%4]:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 10 cilium_host [fe80::a428:57ff:fecf:ed54%5]:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 11 cilium_vxlan [fe80::fcbc:dbff:fe70:28fb%6]:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 12 lxc_health [fe80::a836:90ff:fe84:8d4f%8]:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 13 lxc5e3ea703bc3c [fe80::48e0:33ff:fe19:c995%10]:123 Sep 9 05:36:34.643008 ntpd[1863]: 9 Sep 05:36:34 ntpd[1863]: Listen normally on 14 lxcf9484e781e68 [fe80::a0ed:c9ff:fe4b:8b60%12]:123 Sep 9 05:36:34.641298 ntpd[1863]: Listen normally on 9 cilium_net [fe80::7c91:97ff:feb8:172%4]:123 Sep 9 05:36:34.641351 ntpd[1863]: Listen normally on 10 cilium_host [fe80::a428:57ff:fecf:ed54%5]:123 Sep 9 05:36:34.641388 ntpd[1863]: Listen normally on 11 cilium_vxlan [fe80::fcbc:dbff:fe70:28fb%6]:123 Sep 9 05:36:34.641427 ntpd[1863]: Listen normally on 12 lxc_health [fe80::a836:90ff:fe84:8d4f%8]:123 Sep 9 05:36:34.641467 ntpd[1863]: Listen normally on 13 lxc5e3ea703bc3c [fe80::48e0:33ff:fe19:c995%10]:123 Sep 9 05:36:34.641504 ntpd[1863]: Listen normally on 14 lxcf9484e781e68 [fe80::a0ed:c9ff:fe4b:8b60%12]:123 Sep 9 05:36:34.840581 containerd[1901]: time="2025-09-09T05:36:34.840204742Z" level=info msg="connecting to shim dd6a20f46ce3e408a36a713f222cc25938240673a656965028f9f12161ca11b8" address="unix:///run/containerd/s/b6460193b5bfe3f696c753e6d23aa5272265e4bdfe750686bbf8997f6706a548" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:34.846598 containerd[1901]: 
time="2025-09-09T05:36:34.845699077Z" level=info msg="connecting to shim 850c1b46430a38f20adb2af75b6b07e6cf6716be9a64fc0ba51b3b2fefd12eb2" address="unix:///run/containerd/s/b05a10f0e248ae37ad615538dd1e02d950dee16f33d8108a0f52c14e52bb1c1d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:34.909016 systemd[1]: Started cri-containerd-850c1b46430a38f20adb2af75b6b07e6cf6716be9a64fc0ba51b3b2fefd12eb2.scope - libcontainer container 850c1b46430a38f20adb2af75b6b07e6cf6716be9a64fc0ba51b3b2fefd12eb2. Sep 9 05:36:34.920970 systemd[1]: Started cri-containerd-dd6a20f46ce3e408a36a713f222cc25938240673a656965028f9f12161ca11b8.scope - libcontainer container dd6a20f46ce3e408a36a713f222cc25938240673a656965028f9f12161ca11b8. Sep 9 05:36:35.029258 containerd[1901]: time="2025-09-09T05:36:35.029175858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xn6b9,Uid:bab089e1-4d8a-4546-8a4d-cd96d75bf273,Namespace:kube-system,Attempt:0,} returns sandbox id \"850c1b46430a38f20adb2af75b6b07e6cf6716be9a64fc0ba51b3b2fefd12eb2\"" Sep 9 05:36:35.061732 containerd[1901]: time="2025-09-09T05:36:35.061249854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5bxrj,Uid:dc902c5e-0c21-4efd-b355-8569e250fca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd6a20f46ce3e408a36a713f222cc25938240673a656965028f9f12161ca11b8\"" Sep 9 05:36:35.074817 containerd[1901]: time="2025-09-09T05:36:35.074236580Z" level=info msg="CreateContainer within sandbox \"850c1b46430a38f20adb2af75b6b07e6cf6716be9a64fc0ba51b3b2fefd12eb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:36:35.075267 containerd[1901]: time="2025-09-09T05:36:35.075233042Z" level=info msg="CreateContainer within sandbox \"dd6a20f46ce3e408a36a713f222cc25938240673a656965028f9f12161ca11b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:36:35.110595 containerd[1901]: time="2025-09-09T05:36:35.110334789Z" level=info msg="Container 940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:35.110823 containerd[1901]: time="2025-09-09T05:36:35.110803107Z" level=info msg="Container e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:35.124207 containerd[1901]: time="2025-09-09T05:36:35.124158139Z" level=info msg="CreateContainer within sandbox \"850c1b46430a38f20adb2af75b6b07e6cf6716be9a64fc0ba51b3b2fefd12eb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135\"" Sep 9 05:36:35.126283 containerd[1901]: time="2025-09-09T05:36:35.126133346Z" level=info msg="StartContainer for \"940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135\"" Sep 9 05:36:35.127250 containerd[1901]: time="2025-09-09T05:36:35.127218782Z" level=info msg="connecting to shim 940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135" address="unix:///run/containerd/s/b05a10f0e248ae37ad615538dd1e02d950dee16f33d8108a0f52c14e52bb1c1d" protocol=ttrpc version=3 Sep 9 05:36:35.135020 containerd[1901]: time="2025-09-09T05:36:35.134963223Z" level=info msg="CreateContainer within sandbox \"dd6a20f46ce3e408a36a713f222cc25938240673a656965028f9f12161ca11b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14\"" Sep 9 05:36:35.136072 containerd[1901]: time="2025-09-09T05:36:35.136025243Z" level=info 
msg="StartContainer for \"e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14\"" Sep 9 05:36:35.137574 containerd[1901]: time="2025-09-09T05:36:35.137526451Z" level=info msg="connecting to shim e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14" address="unix:///run/containerd/s/b6460193b5bfe3f696c753e6d23aa5272265e4bdfe750686bbf8997f6706a548" protocol=ttrpc version=3 Sep 9 05:36:35.152776 systemd[1]: Started cri-containerd-940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135.scope - libcontainer container 940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135. Sep 9 05:36:35.156017 systemd[1]: Started cri-containerd-e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14.scope - libcontainer container e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14. Sep 9 05:36:35.217090 containerd[1901]: time="2025-09-09T05:36:35.216969091Z" level=info msg="StartContainer for \"e02d1c21e58aa35f06addc09a9e2735eb8951986675785ef1a10e3f2c1468f14\" returns successfully" Sep 9 05:36:35.217782 containerd[1901]: time="2025-09-09T05:36:35.217751963Z" level=info msg="StartContainer for \"940bbc10dcc6b8be0a3bcf686accb22f157ae31f1537d1a7fdceda8030886135\" returns successfully" Sep 9 05:36:35.814546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694158359.mount: Deactivated successfully. Sep 9 05:36:35.814671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273715352.mount: Deactivated successfully. Sep 9 05:36:36.020222 kubelet[3285]: I0909 05:36:36.019823 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5bxrj" podStartSLOduration=29.018819742 podStartE2EDuration="29.018819742s" podCreationTimestamp="2025-09-09 05:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:36.015850731 +0000 UTC m=+32.598698565" watchObservedRunningTime="2025-09-09 05:36:36.018819742 +0000 UTC m=+32.601667574" Sep 9 05:36:36.082988 kubelet[3285]: I0909 05:36:36.081033 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xn6b9" podStartSLOduration=29.0810099 podStartE2EDuration="29.0810099s" podCreationTimestamp="2025-09-09 05:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:36.075277674 +0000 UTC m=+32.658125508" watchObservedRunningTime="2025-09-09 05:36:36.0810099 +0000 UTC m=+32.663857734" Sep 9 05:36:38.673801 systemd[1]: Started sshd@7-172.31.25.117:22-147.75.109.163:49008.service - OpenSSH per-connection server daemon (147.75.109.163:49008). Sep 9 05:36:38.880102 sshd[4763]: Accepted publickey for core from 147.75.109.163 port 49008 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:36:38.882227 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:38.900953 systemd-logind[1872]: New session 8 of user core. Sep 9 05:36:38.914860 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:36:39.837581 sshd[4767]: Connection closed by 147.75.109.163 port 49008 Sep 9 05:36:39.838293 sshd-session[4763]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:39.848215 systemd[1]: sshd@7-172.31.25.117:22-147.75.109.163:49008.service: Deactivated successfully. 
Sep 9 05:36:39.853079 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:36:39.855757 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:36:39.857374 systemd-logind[1872]: Removed session 8. Sep 9 05:36:44.872820 systemd[1]: Started sshd@8-172.31.25.117:22-147.75.109.163:49338.service - OpenSSH per-connection server daemon (147.75.109.163:49338). Sep 9 05:36:45.056196 sshd[4782]: Accepted publickey for core from 147.75.109.163 port 49338 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:36:45.060400 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:45.067635 systemd-logind[1872]: New session 9 of user core. Sep 9 05:36:45.077840 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:36:45.288888 sshd[4785]: Connection closed by 147.75.109.163 port 49338 Sep 9 05:36:45.290846 sshd-session[4782]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:45.295183 systemd[1]: sshd@8-172.31.25.117:22-147.75.109.163:49338.service: Deactivated successfully. Sep 9 05:36:45.298993 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:36:45.300077 systemd-logind[1872]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:36:45.302243 systemd-logind[1872]: Removed session 9. Sep 9 05:36:50.324809 systemd[1]: Started sshd@9-172.31.25.117:22-147.75.109.163:53362.service - OpenSSH per-connection server daemon (147.75.109.163:53362). Sep 9 05:36:50.506388 sshd[4798]: Accepted publickey for core from 147.75.109.163 port 53362 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:36:50.507795 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:50.513427 systemd-logind[1872]: New session 10 of user core. Sep 9 05:36:50.519819 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 05:36:50.712873 sshd[4801]: Connection closed by 147.75.109.163 port 53362 Sep 9 05:36:50.713764 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:50.718347 systemd[1]: sshd@9-172.31.25.117:22-147.75.109.163:53362.service: Deactivated successfully. Sep 9 05:36:50.721122 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 05:36:50.722431 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit. Sep 9 05:36:50.724403 systemd-logind[1872]: Removed session 10. Sep 9 05:36:55.746938 systemd[1]: Started sshd@10-172.31.25.117:22-147.75.109.163:53372.service - OpenSSH per-connection server daemon (147.75.109.163:53372). Sep 9 05:36:55.916653 sshd[4815]: Accepted publickey for core from 147.75.109.163 port 53372 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:36:55.918424 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:55.924026 systemd-logind[1872]: New session 11 of user core. Sep 9 05:36:55.928798 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 05:36:56.147751 sshd[4818]: Connection closed by 147.75.109.163 port 53372 Sep 9 05:36:56.148971 sshd-session[4815]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:56.153026 systemd[1]: sshd@10-172.31.25.117:22-147.75.109.163:53372.service: Deactivated successfully. Sep 9 05:36:56.155412 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 05:36:56.156818 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit. 
Sep 9 05:36:56.159671 systemd-logind[1872]: Removed session 11. Sep 9 05:36:56.189891 systemd[1]: Started sshd@11-172.31.25.117:22-147.75.109.163:53374.service - OpenSSH per-connection server daemon (147.75.109.163:53374). Sep 9 05:36:56.357760 sshd[4831]: Accepted publickey for core from 147.75.109.163 port 53374 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:36:56.359324 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:56.365925 systemd-logind[1872]: New session 12 of user core. Sep 9 05:36:56.370939 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 05:36:56.627592 sshd[4834]: Connection closed by 147.75.109.163 port 53374 Sep 9 05:36:56.628683 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:56.638739 systemd[1]: sshd@11-172.31.25.117:22-147.75.109.163:53374.service: Deactivated successfully. Sep 9 05:36:56.647028 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 05:36:56.649137 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit. Sep 9 05:36:56.665898 systemd[1]: Started sshd@12-172.31.25.117:22-147.75.109.163:53384.service - OpenSSH per-connection server daemon (147.75.109.163:53384). Sep 9 05:36:56.669379 systemd-logind[1872]: Removed session 12. Sep 9 05:36:56.837270 sshd[4844]: Accepted publickey for core from 147.75.109.163 port 53384 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:36:56.838635 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:36:56.844713 systemd-logind[1872]: New session 13 of user core. Sep 9 05:36:56.851792 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 05:36:57.047226 sshd[4847]: Connection closed by 147.75.109.163 port 53384 Sep 9 05:36:57.048738 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:57.052822 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit. Sep 9 05:36:57.052967 systemd[1]: sshd@12-172.31.25.117:22-147.75.109.163:53384.service: Deactivated successfully. Sep 9 05:36:57.055290 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 05:36:57.057496 systemd-logind[1872]: Removed session 13. Sep 9 05:37:02.100151 systemd[1]: Started sshd@13-172.31.25.117:22-147.75.109.163:48076.service - OpenSSH per-connection server daemon (147.75.109.163:48076). Sep 9 05:37:02.361466 sshd[4860]: Accepted publickey for core from 147.75.109.163 port 48076 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:02.364626 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:02.379015 systemd-logind[1872]: New session 14 of user core. Sep 9 05:37:02.396015 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 05:37:02.767820 sshd[4863]: Connection closed by 147.75.109.163 port 48076 Sep 9 05:37:02.769807 sshd-session[4860]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:02.775710 systemd[1]: sshd@13-172.31.25.117:22-147.75.109.163:48076.service: Deactivated successfully. Sep 9 05:37:02.781509 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 05:37:02.783821 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit. Sep 9 05:37:02.787410 systemd-logind[1872]: Removed session 14. 
Sep 9 05:37:07.805288 systemd[1]: Started sshd@14-172.31.25.117:22-147.75.109.163:48082.service - OpenSSH per-connection server daemon (147.75.109.163:48082). Sep 9 05:37:07.977058 sshd[4877]: Accepted publickey for core from 147.75.109.163 port 48082 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:07.978760 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:07.985518 systemd-logind[1872]: New session 15 of user core. Sep 9 05:37:07.988751 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 05:37:08.179536 sshd[4880]: Connection closed by 147.75.109.163 port 48082 Sep 9 05:37:08.180080 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:08.184677 systemd[1]: sshd@14-172.31.25.117:22-147.75.109.163:48082.service: Deactivated successfully. Sep 9 05:37:08.186872 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 05:37:08.187624 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit. Sep 9 05:37:08.189734 systemd-logind[1872]: Removed session 15. Sep 9 05:37:08.220752 systemd[1]: Started sshd@15-172.31.25.117:22-147.75.109.163:48098.service - OpenSSH per-connection server daemon (147.75.109.163:48098). Sep 9 05:37:08.389406 sshd[4893]: Accepted publickey for core from 147.75.109.163 port 48098 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:08.390789 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:08.396049 systemd-logind[1872]: New session 16 of user core. Sep 9 05:37:08.403806 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 05:37:09.103829 sshd[4896]: Connection closed by 147.75.109.163 port 48098 Sep 9 05:37:09.104713 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:09.113808 systemd[1]: sshd@15-172.31.25.117:22-147.75.109.163:48098.service: Deactivated successfully. Sep 9 05:37:09.116392 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:37:09.118724 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:37:09.120315 systemd-logind[1872]: Removed session 16. Sep 9 05:37:09.136163 systemd[1]: Started sshd@16-172.31.25.117:22-147.75.109.163:48100.service - OpenSSH per-connection server daemon (147.75.109.163:48100). Sep 9 05:37:09.333271 sshd[4908]: Accepted publickey for core from 147.75.109.163 port 48100 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:09.334722 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:09.341700 systemd-logind[1872]: New session 17 of user core. Sep 9 05:37:09.345834 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 05:37:10.124071 sshd[4911]: Connection closed by 147.75.109.163 port 48100 Sep 9 05:37:10.124878 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:10.134496 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:37:10.136099 systemd[1]: sshd@16-172.31.25.117:22-147.75.109.163:48100.service: Deactivated successfully. Sep 9 05:37:10.139622 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:37:10.144925 systemd-logind[1872]: Removed session 17. Sep 9 05:37:10.158968 systemd[1]: Started sshd@17-172.31.25.117:22-147.75.109.163:58442.service - OpenSSH per-connection server daemon (147.75.109.163:58442). 
Sep 9 05:37:10.335682 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 58442 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:10.337035 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:10.342866 systemd-logind[1872]: New session 18 of user core. Sep 9 05:37:10.346750 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 05:37:10.691069 sshd[4931]: Connection closed by 147.75.109.163 port 58442 Sep 9 05:37:10.691649 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:10.697395 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:37:10.699150 systemd[1]: sshd@17-172.31.25.117:22-147.75.109.163:58442.service: Deactivated successfully. Sep 9 05:37:10.701774 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:37:10.704143 systemd-logind[1872]: Removed session 18. Sep 9 05:37:10.721904 systemd[1]: Started sshd@18-172.31.25.117:22-147.75.109.163:58452.service - OpenSSH per-connection server daemon (147.75.109.163:58452). Sep 9 05:37:10.890757 sshd[4941]: Accepted publickey for core from 147.75.109.163 port 58452 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:10.892521 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:10.898914 systemd-logind[1872]: New session 19 of user core. Sep 9 05:37:10.904789 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 05:37:11.095033 sshd[4944]: Connection closed by 147.75.109.163 port 58452 Sep 9 05:37:11.095652 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:11.100733 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit. Sep 9 05:37:11.101216 systemd[1]: sshd@18-172.31.25.117:22-147.75.109.163:58452.service: Deactivated successfully. Sep 9 05:37:11.104024 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 05:37:11.106107 systemd-logind[1872]: Removed session 19. Sep 9 05:37:16.130916 systemd[1]: Started sshd@19-172.31.25.117:22-147.75.109.163:58454.service - OpenSSH per-connection server daemon (147.75.109.163:58454). Sep 9 05:37:16.296048 sshd[4959]: Accepted publickey for core from 147.75.109.163 port 58454 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:16.297698 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:16.303697 systemd-logind[1872]: New session 20 of user core. Sep 9 05:37:16.310889 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 05:37:16.505493 sshd[4962]: Connection closed by 147.75.109.163 port 58454 Sep 9 05:37:16.506035 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:16.509484 systemd[1]: sshd@19-172.31.25.117:22-147.75.109.163:58454.service: Deactivated successfully. Sep 9 05:37:16.511428 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 05:37:16.514758 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit. Sep 9 05:37:16.516096 systemd-logind[1872]: Removed session 20. Sep 9 05:37:21.540267 systemd[1]: Started sshd@20-172.31.25.117:22-147.75.109.163:34890.service - OpenSSH per-connection server daemon (147.75.109.163:34890). 
Sep 9 05:37:21.707763 sshd[4974]: Accepted publickey for core from 147.75.109.163 port 34890 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:21.709334 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:21.715269 systemd-logind[1872]: New session 21 of user core. Sep 9 05:37:21.717712 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 05:37:21.902873 sshd[4977]: Connection closed by 147.75.109.163 port 34890 Sep 9 05:37:21.903435 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:21.907728 systemd[1]: sshd@20-172.31.25.117:22-147.75.109.163:34890.service: Deactivated successfully. Sep 9 05:37:21.910334 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 05:37:21.911499 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit. Sep 9 05:37:21.913261 systemd-logind[1872]: Removed session 21. Sep 9 05:37:26.937261 systemd[1]: Started sshd@21-172.31.25.117:22-147.75.109.163:34906.service - OpenSSH per-connection server daemon (147.75.109.163:34906). Sep 9 05:37:27.111520 sshd[4989]: Accepted publickey for core from 147.75.109.163 port 34906 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:27.112106 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:27.118171 systemd-logind[1872]: New session 22 of user core. Sep 9 05:37:27.123851 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 05:37:27.320411 sshd[4992]: Connection closed by 147.75.109.163 port 34906 Sep 9 05:37:27.320945 sshd-session[4989]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:27.326430 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit. Sep 9 05:37:27.327270 systemd[1]: sshd@21-172.31.25.117:22-147.75.109.163:34906.service: Deactivated successfully. Sep 9 05:37:27.329572 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 05:37:27.330933 systemd-logind[1872]: Removed session 22. Sep 9 05:37:27.356836 systemd[1]: Started sshd@22-172.31.25.117:22-147.75.109.163:34910.service - OpenSSH per-connection server daemon (147.75.109.163:34910). Sep 9 05:37:27.540443 sshd[5004]: Accepted publickey for core from 147.75.109.163 port 34910 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:27.541870 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:27.547101 systemd-logind[1872]: New session 23 of user core. Sep 9 05:37:27.550729 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 05:37:28.980596 containerd[1901]: time="2025-09-09T05:37:28.979326810Z" level=info msg="StopContainer for \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" with timeout 30 (s)" Sep 9 05:37:28.982530 containerd[1901]: time="2025-09-09T05:37:28.981853297Z" level=info msg="Stop container \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" with signal terminated" Sep 9 05:37:28.999278 systemd[1]: cri-containerd-7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb.scope: Deactivated successfully. 
Sep 9 05:37:29.003988 containerd[1901]: time="2025-09-09T05:37:29.003947608Z" level=info msg="received exit event container_id:\"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" id:\"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" pid:4070 exited_at:{seconds:1757396249 nanos:2404354}" Sep 9 05:37:29.005172 containerd[1901]: time="2025-09-09T05:37:29.004162231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" id:\"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" pid:4070 exited_at:{seconds:1757396249 nanos:2404354}" Sep 9 05:37:29.022430 containerd[1901]: time="2025-09-09T05:37:29.022381965Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:37:29.025463 containerd[1901]: time="2025-09-09T05:37:29.025425391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" id:\"81bd538bc3682be7fcfaed4dcfcc36d4d2ccf81dd98271e8d4f2f9f966dab078\" pid:5033 exited_at:{seconds:1757396249 nanos:23922795}" Sep 9 05:37:29.026522 containerd[1901]: time="2025-09-09T05:37:29.026469150Z" level=info msg="StopContainer for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" with timeout 2 (s)" Sep 9 05:37:29.027016 containerd[1901]: time="2025-09-09T05:37:29.026951744Z" level=info msg="Stop container \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" with signal terminated" Sep 9 05:37:29.037300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb-rootfs.mount: Deactivated successfully. Sep 9 05:37:29.038093 systemd-networkd[1817]: lxc_health: Link DOWN Sep 9 05:37:29.038097 systemd-networkd[1817]: lxc_health: Lost carrier Sep 9 05:37:29.059087 systemd[1]: cri-containerd-dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415.scope: Deactivated successfully. Sep 9 05:37:29.060129 systemd[1]: cri-containerd-dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415.scope: Consumed 8.061s CPU time, 194.5M memory peak, 73.6M read from disk, 13.3M written to disk. 
Sep 9 05:37:29.061226 containerd[1901]: time="2025-09-09T05:37:29.061185864Z" level=info msg="received exit event container_id:\"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" id:\"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" pid:4101 exited_at:{seconds:1757396249 nanos:60517540}" Sep 9 05:37:29.061530 containerd[1901]: time="2025-09-09T05:37:29.061469768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" id:\"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" pid:4101 exited_at:{seconds:1757396249 nanos:60517540}" Sep 9 05:37:29.064157 containerd[1901]: time="2025-09-09T05:37:29.064094129Z" level=info msg="StopContainer for \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" returns successfully" Sep 9 05:37:29.065971 containerd[1901]: time="2025-09-09T05:37:29.065727526Z" level=info msg="StopPodSandbox for \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\"" Sep 9 05:37:29.065971 containerd[1901]: time="2025-09-09T05:37:29.065801278Z" level=info msg="Container to stop \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:37:29.092639 systemd[1]: cri-containerd-06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47.scope: Deactivated successfully. Sep 9 05:37:29.095639 containerd[1901]: time="2025-09-09T05:37:29.095588539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" id:\"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" pid:3514 exit_status:137 exited_at:{seconds:1757396249 nanos:93081714}" Sep 9 05:37:29.109298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415-rootfs.mount: Deactivated successfully. 
Sep 9 05:37:29.133370 containerd[1901]: time="2025-09-09T05:37:29.133298391Z" level=info msg="StopContainer for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" returns successfully" Sep 9 05:37:29.136219 containerd[1901]: time="2025-09-09T05:37:29.135852851Z" level=info msg="StopPodSandbox for \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\"" Sep 9 05:37:29.136219 containerd[1901]: time="2025-09-09T05:37:29.135936531Z" level=info msg="Container to stop \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:37:29.136219 containerd[1901]: time="2025-09-09T05:37:29.135952892Z" level=info msg="Container to stop \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:37:29.136219 containerd[1901]: time="2025-09-09T05:37:29.135965774Z" level=info msg="Container to stop \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:37:29.136219 containerd[1901]: time="2025-09-09T05:37:29.135979992Z" level=info msg="Container to stop \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:37:29.136219 containerd[1901]: time="2025-09-09T05:37:29.135992587Z" level=info msg="Container to stop \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:37:29.145831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47-rootfs.mount: Deactivated successfully. Sep 9 05:37:29.147940 systemd[1]: cri-containerd-47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2.scope: Deactivated successfully. Sep 9 05:37:29.152082 containerd[1901]: time="2025-09-09T05:37:29.152037440Z" level=info msg="shim disconnected" id=06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47 namespace=k8s.io Sep 9 05:37:29.152082 containerd[1901]: time="2025-09-09T05:37:29.152066744Z" level=warning msg="cleaning up after shim disconnected" id=06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47 namespace=k8s.io Sep 9 05:37:29.152249 containerd[1901]: time="2025-09-09T05:37:29.152077464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:37:29.168497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47-shm.mount: Deactivated successfully. 
Sep 9 05:37:29.185797 containerd[1901]: time="2025-09-09T05:37:29.185734743Z" level=info msg="TearDown network for sandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" successfully" Sep 9 05:37:29.185797 containerd[1901]: time="2025-09-09T05:37:29.185790307Z" level=info msg="StopPodSandbox for \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" returns successfully" Sep 9 05:37:29.193605 containerd[1901]: time="2025-09-09T05:37:29.193338834Z" level=info msg="received exit event sandbox_id:\"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" exit_status:137 exited_at:{seconds:1757396249 nanos:93081714}" Sep 9 05:37:29.197951 containerd[1901]: time="2025-09-09T05:37:29.197911704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" id:\"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" pid:3440 exit_status:137 exited_at:{seconds:1757396249 nanos:150413466}" Sep 9 05:37:29.215249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2-rootfs.mount: Deactivated successfully. Sep 9 05:37:29.226207 containerd[1901]: time="2025-09-09T05:37:29.226172436Z" level=info msg="shim disconnected" id=47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2 namespace=k8s.io Sep 9 05:37:29.226207 containerd[1901]: time="2025-09-09T05:37:29.226202168Z" level=warning msg="cleaning up after shim disconnected" id=47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2 namespace=k8s.io Sep 9 05:37:29.226410 containerd[1901]: time="2025-09-09T05:37:29.226209557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:37:29.227565 containerd[1901]: time="2025-09-09T05:37:29.226924651Z" level=info msg="received exit event sandbox_id:\"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" exit_status:137 exited_at:{seconds:1757396249 nanos:150413466}" Sep 9 05:37:29.227848 containerd[1901]: time="2025-09-09T05:37:29.227748985Z" level=info msg="TearDown network for sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" successfully" Sep 9 05:37:29.227848 containerd[1901]: time="2025-09-09T05:37:29.227774001Z" level=info msg="StopPodSandbox for \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" returns successfully" Sep 9 05:37:29.340430 kubelet[3285]: I0909 05:37:29.339118 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-hubble-tls\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340430 kubelet[3285]: I0909 05:37:29.339158 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-net\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340430 kubelet[3285]: I0909 05:37:29.339178 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-kernel\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340430 kubelet[3285]: I0909 05:37:29.339196 
3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-bpf-maps\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340430 kubelet[3285]: I0909 05:37:29.339211 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-hostproc\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340430 kubelet[3285]: I0909 05:37:29.339224 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-cgroup\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340966 kubelet[3285]: I0909 05:37:29.339237 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cni-path\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340966 kubelet[3285]: I0909 05:37:29.339251 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-lib-modules\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340966 kubelet[3285]: I0909 05:37:29.339264 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-run\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340966 kubelet[3285]: I0909 05:37:29.339284 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klt27\" (UniqueName: \"kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-kube-api-access-klt27\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340966 kubelet[3285]: I0909 05:37:29.339304 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-xtables-lock\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.340966 kubelet[3285]: I0909 05:37:29.339320 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l7ww\" (UniqueName: \"kubernetes.io/projected/23b0a564-0383-4387-8353-7b772362e503-kube-api-access-7l7ww\") pod \"23b0a564-0383-4387-8353-7b772362e503\" (UID: \"23b0a564-0383-4387-8353-7b772362e503\") " Sep 9 05:37:29.341228 kubelet[3285]: I0909 05:37:29.339337 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/973febec-1d52-4e08-84d7-b41dca1ca333-clustermesh-secrets\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.341228 kubelet[3285]: I0909 05:37:29.339352 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-etc-cni-netd\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.341228 kubelet[3285]: I0909 05:37:29.339368 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-config-path\") pod \"973febec-1d52-4e08-84d7-b41dca1ca333\" (UID: \"973febec-1d52-4e08-84d7-b41dca1ca333\") " Sep 9 05:37:29.341228 kubelet[3285]: I0909 05:37:29.339386 3285 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23b0a564-0383-4387-8353-7b772362e503-cilium-config-path\") pod \"23b0a564-0383-4387-8353-7b772362e503\" (UID: \"23b0a564-0383-4387-8353-7b772362e503\") " Sep 9 05:37:29.342459 kubelet[3285]: I0909 05:37:29.342380 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b0a564-0383-4387-8353-7b772362e503-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "23b0a564-0383-4387-8353-7b772362e503" (UID: "23b0a564-0383-4387-8353-7b772362e503"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:37:29.342655 kubelet[3285]: I0909 05:37:29.342640 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342722 kubelet[3285]: I0909 05:37:29.342693 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342761 kubelet[3285]: I0909 05:37:29.342751 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342790 kubelet[3285]: I0909 05:37:29.342768 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342790 kubelet[3285]: I0909 05:37:29.342781 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-hostproc" (OuterVolumeSpecName: "hostproc") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342845 kubelet[3285]: I0909 05:37:29.342794 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342845 kubelet[3285]: I0909 05:37:29.342819 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cni-path" (OuterVolumeSpecName: "cni-path") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.342845 kubelet[3285]: I0909 05:37:29.342831 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.343480 kubelet[3285]: I0909 05:37:29.343460 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.344645 kubelet[3285]: I0909 05:37:29.343460 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 05:37:29.346094 kubelet[3285]: I0909 05:37:29.344925 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:37:29.347818 kubelet[3285]: I0909 05:37:29.347795 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:37:29.348340 kubelet[3285]: I0909 05:37:29.348293 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-kube-api-access-klt27" (OuterVolumeSpecName: "kube-api-access-klt27") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "kube-api-access-klt27". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:37:29.348667 kubelet[3285]: I0909 05:37:29.348651 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973febec-1d52-4e08-84d7-b41dca1ca333-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "973febec-1d52-4e08-84d7-b41dca1ca333" (UID: "973febec-1d52-4e08-84d7-b41dca1ca333"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:37:29.348859 kubelet[3285]: I0909 05:37:29.348835 3285 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b0a564-0383-4387-8353-7b772362e503-kube-api-access-7l7ww" (OuterVolumeSpecName: "kube-api-access-7l7ww") pod "23b0a564-0383-4387-8353-7b772362e503" (UID: "23b0a564-0383-4387-8353-7b772362e503"). InnerVolumeSpecName "kube-api-access-7l7ww". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:37:29.440103 kubelet[3285]: I0909 05:37:29.440050 3285 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cni-path\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440103 kubelet[3285]: I0909 05:37:29.440088 3285 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-lib-modules\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440103 kubelet[3285]: I0909 05:37:29.440102 3285 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-run\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440117 3285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-klt27\" (UniqueName: \"kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-kube-api-access-klt27\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440131 3285 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-xtables-lock\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440144 3285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7l7ww\" (UniqueName: \"kubernetes.io/projected/23b0a564-0383-4387-8353-7b772362e503-kube-api-access-7l7ww\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440155 3285 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/973febec-1d52-4e08-84d7-b41dca1ca333-clustermesh-secrets\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440166 3285 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-etc-cni-netd\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440176 3285 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-config-path\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440187 3285 reconciler_common.go:299] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23b0a564-0383-4387-8353-7b772362e503-cilium-config-path\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440364 kubelet[3285]: I0909 05:37:29.440198 3285 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-net\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440596 kubelet[3285]: I0909 05:37:29.440209 3285 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-host-proc-sys-kernel\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440596 kubelet[3285]: I0909 05:37:29.440223 3285 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/973febec-1d52-4e08-84d7-b41dca1ca333-hubble-tls\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440596 kubelet[3285]: I0909 05:37:29.440234 3285 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-cilium-cgroup\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440596 kubelet[3285]: I0909 05:37:29.440247 3285 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-bpf-maps\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.440596 kubelet[3285]: I0909 05:37:29.440259 3285 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/973febec-1d52-4e08-84d7-b41dca1ca333-hostproc\") on node \"ip-172-31-25-117\" DevicePath \"\"" Sep 9 05:37:29.671660 systemd[1]: Removed slice kubepods-burstable-pod973febec_1d52_4e08_84d7_b41dca1ca333.slice - libcontainer container kubepods-burstable-pod973febec_1d52_4e08_84d7_b41dca1ca333.slice. Sep 9 05:37:29.671890 systemd[1]: kubepods-burstable-pod973febec_1d52_4e08_84d7_b41dca1ca333.slice: Consumed 8.161s CPU time, 194.9M memory peak, 73.6M read from disk, 13.3M written to disk. Sep 9 05:37:29.674838 systemd[1]: Removed slice kubepods-besteffort-pod23b0a564_0383_4387_8353_7b772362e503.slice - libcontainer container kubepods-besteffort-pod23b0a564_0383_4387_8353_7b772362e503.slice. Sep 9 05:37:30.036325 systemd[1]: var-lib-kubelet-pods-23b0a564\x2d0383\x2d4387\x2d8353\x2d7b772362e503-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7l7ww.mount: Deactivated successfully. Sep 9 05:37:30.036473 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2-shm.mount: Deactivated successfully. Sep 9 05:37:30.046879 systemd[1]: var-lib-kubelet-pods-973febec\x2d1d52\x2d4e08\x2d84d7\x2db41dca1ca333-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklt27.mount: Deactivated successfully. Sep 9 05:37:30.047057 systemd[1]: var-lib-kubelet-pods-973febec\x2d1d52\x2d4e08\x2d84d7\x2db41dca1ca333-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 05:37:30.047150 systemd[1]: var-lib-kubelet-pods-973febec\x2d1d52\x2d4e08\x2d84d7\x2db41dca1ca333-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 9 05:37:30.125101 kubelet[3285]: I0909 05:37:30.124993 3285 scope.go:117] "RemoveContainer" containerID="7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb" Sep 9 05:37:30.134179 containerd[1901]: time="2025-09-09T05:37:30.134150194Z" level=info msg="RemoveContainer for \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\"" Sep 9 05:37:30.150703 containerd[1901]: time="2025-09-09T05:37:30.150652496Z" level=info msg="RemoveContainer for \"7caad6476587b09503d2562a6573695c0ee8f8116ad043d33a24ef545dd272eb\" returns successfully" Sep 9 05:37:30.151179 kubelet[3285]: I0909 05:37:30.151162 3285 scope.go:117] "RemoveContainer" containerID="dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415" Sep 9 05:37:30.153726 containerd[1901]: time="2025-09-09T05:37:30.153649385Z" level=info msg="RemoveContainer for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\"" Sep 9 05:37:30.162156 containerd[1901]: time="2025-09-09T05:37:30.162115256Z" level=info msg="RemoveContainer for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" returns successfully" Sep 9 05:37:30.163155 kubelet[3285]: I0909 05:37:30.163104 3285 scope.go:117] "RemoveContainer" containerID="c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117" Sep 9 05:37:30.164774 containerd[1901]: time="2025-09-09T05:37:30.164738847Z" level=info msg="RemoveContainer for \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\"" Sep 9 05:37:30.171033 containerd[1901]: time="2025-09-09T05:37:30.170997386Z" level=info msg="RemoveContainer for \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" returns successfully" Sep 9 05:37:30.171227 kubelet[3285]: I0909 05:37:30.171198 3285 scope.go:117] "RemoveContainer" containerID="80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f" Sep 9 05:37:30.173586 containerd[1901]: time="2025-09-09T05:37:30.173445470Z" level=info msg="RemoveContainer for \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\"" Sep 9 05:37:30.179623 containerd[1901]: time="2025-09-09T05:37:30.179566969Z" level=info msg="RemoveContainer for \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" returns successfully" Sep 9 05:37:30.179918 kubelet[3285]: I0909 05:37:30.179889 3285 scope.go:117] "RemoveContainer" containerID="c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671" Sep 9 05:37:30.181481 containerd[1901]: time="2025-09-09T05:37:30.181446888Z" level=info msg="RemoveContainer for \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\"" Sep 9 05:37:30.186980 containerd[1901]: time="2025-09-09T05:37:30.186941809Z" level=info msg="RemoveContainer for \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" returns successfully" Sep 9 05:37:30.187317 kubelet[3285]: I0909 05:37:30.187277 3285 scope.go:117] "RemoveContainer" containerID="211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3" Sep 9 05:37:30.188807 containerd[1901]: time="2025-09-09T05:37:30.188759939Z" level=info msg="RemoveContainer for \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\"" Sep 9 05:37:30.194366 containerd[1901]: time="2025-09-09T05:37:30.194326320Z" level=info msg="RemoveContainer for \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" returns successfully" Sep 9 05:37:30.194711 kubelet[3285]: I0909 05:37:30.194680 3285 scope.go:117] "RemoveContainer" 
containerID="dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415" Sep 9 05:37:30.197712 containerd[1901]: time="2025-09-09T05:37:30.194940381Z" level=error msg="ContainerStatus for \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\": not found" Sep 9 05:37:30.201219 kubelet[3285]: E0909 05:37:30.201154 3285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\": not found" containerID="dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415" Sep 9 05:37:30.201347 kubelet[3285]: I0909 05:37:30.201232 3285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415"} err="failed to get container status \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd95901274a6ae0632941c9c49bec7215a03ec0f44b97e2f42ffc979ab763415\": not found" Sep 9 05:37:30.201347 kubelet[3285]: I0909 05:37:30.201340 3285 scope.go:117] "RemoveContainer" containerID="c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117" Sep 9 05:37:30.201835 containerd[1901]: time="2025-09-09T05:37:30.201793157Z" level=error msg="ContainerStatus for \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\": not found" Sep 9 05:37:30.212964 kubelet[3285]: E0909 05:37:30.212816 3285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\": not found" containerID="c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117" Sep 9 05:37:30.212964 kubelet[3285]: I0909 05:37:30.212854 3285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117"} err="failed to get container status \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\": rpc error: code = NotFound desc = an error occurred when try to find container \"c59202e7ad01db9d084d9cdea18f8f00185afd31a56620d3d44b070915b06117\": not found" Sep 9 05:37:30.212964 kubelet[3285]: I0909 05:37:30.212877 3285 scope.go:117] "RemoveContainer" containerID="80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f" Sep 9 05:37:30.213696 containerd[1901]: time="2025-09-09T05:37:30.213448364Z" level=error msg="ContainerStatus for \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\": not found" Sep 9 05:37:30.213760 kubelet[3285]: E0909 05:37:30.213639 3285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\": not found" 
containerID="80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f" Sep 9 05:37:30.213760 kubelet[3285]: I0909 05:37:30.213662 3285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f"} err="failed to get container status \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\": rpc error: code = NotFound desc = an error occurred when try to find container \"80a1998991d6f4115ed97106e7c7299f589db7872da7ab8f377fc7b2bc8b604f\": not found" Sep 9 05:37:30.213863 kubelet[3285]: I0909 05:37:30.213681 3285 scope.go:117] "RemoveContainer" containerID="c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671" Sep 9 05:37:30.214067 containerd[1901]: time="2025-09-09T05:37:30.214040578Z" level=error msg="ContainerStatus for \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\": not found" Sep 9 05:37:30.214163 kubelet[3285]: E0909 05:37:30.214142 3285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\": not found" containerID="c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671" Sep 9 05:37:30.214202 kubelet[3285]: I0909 05:37:30.214168 3285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671"} err="failed to get container status \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\": rpc error: code = NotFound desc = an error occurred when try to find container \"c23f504ec839da69da89251d87178c362c15f3b086cebf09cab9c916fc02b671\": not found" Sep 9 05:37:30.214202 kubelet[3285]: I0909 05:37:30.214184 3285 scope.go:117] "RemoveContainer" containerID="211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3" Sep 9 05:37:30.214359 containerd[1901]: time="2025-09-09T05:37:30.214313644Z" level=error msg="ContainerStatus for \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\": not found" Sep 9 05:37:30.214436 kubelet[3285]: E0909 05:37:30.214423 3285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\": not found" containerID="211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3" Sep 9 05:37:30.214478 kubelet[3285]: I0909 05:37:30.214458 3285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3"} err="failed to get container status \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"211ce4c330a48fc3e066e9921cdef226b67ed65ec46530914bff031ee3b9c1c3\": not found" Sep 9 05:37:30.943725 sshd[5007]: Connection closed by 147.75.109.163 port 34910 Sep 9 05:37:30.944794 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Sep 9 
05:37:30.949533 systemd[1]: sshd@22-172.31.25.117:22-147.75.109.163:34910.service: Deactivated successfully. Sep 9 05:37:30.951710 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 05:37:30.952812 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit. Sep 9 05:37:30.954364 systemd-logind[1872]: Removed session 23. Sep 9 05:37:30.978778 systemd[1]: Started sshd@23-172.31.25.117:22-147.75.109.163:50990.service - OpenSSH per-connection server daemon (147.75.109.163:50990). Sep 9 05:37:31.169246 sshd[5153]: Accepted publickey for core from 147.75.109.163 port 50990 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:31.170715 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:31.176508 systemd-logind[1872]: New session 24 of user core. Sep 9 05:37:31.179737 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 05:37:31.641098 ntpd[1863]: Deleting interface #12 lxc_health, fe80::a836:90ff:fe84:8d4f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=57 secs Sep 9 05:37:31.641689 ntpd[1863]: 9 Sep 05:37:31 ntpd[1863]: Deleting interface #12 lxc_health, fe80::a836:90ff:fe84:8d4f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=57 secs Sep 9 05:37:31.666726 kubelet[3285]: I0909 05:37:31.666689 3285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b0a564-0383-4387-8353-7b772362e503" path="/var/lib/kubelet/pods/23b0a564-0383-4387-8353-7b772362e503/volumes" Sep 9 05:37:31.668581 kubelet[3285]: I0909 05:37:31.667269 3285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973febec-1d52-4e08-84d7-b41dca1ca333" path="/var/lib/kubelet/pods/973febec-1d52-4e08-84d7-b41dca1ca333/volumes" Sep 9 05:37:32.157666 sshd[5156]: Connection closed by 147.75.109.163 port 50990 Sep 9 05:37:32.158489 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:32.169460 kubelet[3285]: I0909 05:37:32.167274 3285 memory_manager.go:355] "RemoveStaleState removing state" podUID="23b0a564-0383-4387-8353-7b772362e503" containerName="cilium-operator" Sep 9 05:37:32.169460 kubelet[3285]: I0909 05:37:32.167302 3285 memory_manager.go:355] "RemoveStaleState removing state" podUID="973febec-1d52-4e08-84d7-b41dca1ca333" containerName="cilium-agent" Sep 9 05:37:32.169254 systemd[1]: sshd@23-172.31.25.117:22-147.75.109.163:50990.service: Deactivated successfully. Sep 9 05:37:32.169964 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit. Sep 9 05:37:32.176419 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 05:37:32.197544 systemd-logind[1872]: Removed session 24. Sep 9 05:37:32.200435 systemd[1]: Started sshd@24-172.31.25.117:22-147.75.109.163:51002.service - OpenSSH per-connection server daemon (147.75.109.163:51002). Sep 9 05:37:32.221886 systemd[1]: Created slice kubepods-burstable-podf75dbd3c_24d0_413b_87f0_448dac36c1d7.slice - libcontainer container kubepods-burstable-podf75dbd3c_24d0_413b_87f0_448dac36c1d7.slice. 
Sep 9 05:37:32.262574 kubelet[3285]: I0909 05:37:32.260757 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-cni-path\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262574 kubelet[3285]: I0909 05:37:32.260811 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-cilium-run\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262574 kubelet[3285]: I0909 05:37:32.260844 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-etc-cni-netd\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262574 kubelet[3285]: I0909 05:37:32.260866 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-bpf-maps\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262574 kubelet[3285]: I0909 05:37:32.260891 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-xtables-lock\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262574 kubelet[3285]: I0909 05:37:32.260915 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f75dbd3c-24d0-413b-87f0-448dac36c1d7-cilium-ipsec-secrets\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262994 kubelet[3285]: I0909 05:37:32.260941 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-hostproc\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262994 kubelet[3285]: I0909 05:37:32.260965 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f75dbd3c-24d0-413b-87f0-448dac36c1d7-hubble-tls\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262994 kubelet[3285]: I0909 05:37:32.260985 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-lib-modules\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262994 kubelet[3285]: I0909 05:37:32.261021 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f75dbd3c-24d0-413b-87f0-448dac36c1d7-clustermesh-secrets\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262994 kubelet[3285]: I0909 05:37:32.261043 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f75dbd3c-24d0-413b-87f0-448dac36c1d7-cilium-config-path\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.262994 kubelet[3285]: I0909 05:37:32.261066 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-host-proc-sys-net\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.263236 kubelet[3285]: I0909 05:37:32.261103 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-host-proc-sys-kernel\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.263236 kubelet[3285]: I0909 05:37:32.261129 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f75dbd3c-24d0-413b-87f0-448dac36c1d7-cilium-cgroup\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.263236 kubelet[3285]: I0909 05:37:32.261155 3285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gx6\" (UniqueName: \"kubernetes.io/projected/f75dbd3c-24d0-413b-87f0-448dac36c1d7-kube-api-access-42gx6\") pod \"cilium-9l4sk\" (UID: \"f75dbd3c-24d0-413b-87f0-448dac36c1d7\") " pod="kube-system/cilium-9l4sk" Sep 9 05:37:32.414689 sshd[5167]: Accepted publickey for core from 147.75.109.163 port 51002 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:32.416650 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:32.422029 systemd-logind[1872]: New session 25 of user core. Sep 9 05:37:32.426755 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 05:37:32.527747 containerd[1901]: time="2025-09-09T05:37:32.527706614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9l4sk,Uid:f75dbd3c-24d0-413b-87f0-448dac36c1d7,Namespace:kube-system,Attempt:0,}" Sep 9 05:37:32.545364 sshd[5174]: Connection closed by 147.75.109.163 port 51002 Sep 9 05:37:32.551641 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:32.561742 systemd[1]: sshd@24-172.31.25.117:22-147.75.109.163:51002.service: Deactivated successfully. Sep 9 05:37:32.564534 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 05:37:32.569383 containerd[1901]: time="2025-09-09T05:37:32.569253139Z" level=info msg="connecting to shim ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305" address="unix:///run/containerd/s/53ce83808c8accd62c4b40a93a6ac4b4f48b93ef3065dad536ad7a12f490eb39" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:37:32.571330 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit. 
Sep 9 05:37:32.587163 systemd[1]: Started sshd@25-172.31.25.117:22-147.75.109.163:51018.service - OpenSSH per-connection server daemon (147.75.109.163:51018). Sep 9 05:37:32.590243 systemd-logind[1872]: Removed session 25. Sep 9 05:37:32.602878 systemd[1]: Started cri-containerd-ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305.scope - libcontainer container ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305. Sep 9 05:37:32.646290 containerd[1901]: time="2025-09-09T05:37:32.646251126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9l4sk,Uid:f75dbd3c-24d0-413b-87f0-448dac36c1d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\"" Sep 9 05:37:32.650168 containerd[1901]: time="2025-09-09T05:37:32.650131910Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:37:32.661385 containerd[1901]: time="2025-09-09T05:37:32.661341182Z" level=info msg="Container e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:32.673333 containerd[1901]: time="2025-09-09T05:37:32.672993118Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\"" Sep 9 05:37:32.674876 containerd[1901]: time="2025-09-09T05:37:32.674835258Z" level=info msg="StartContainer for \"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\"" Sep 9 05:37:32.676209 containerd[1901]: time="2025-09-09T05:37:32.676163874Z" level=info msg="connecting to shim e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69" address="unix:///run/containerd/s/53ce83808c8accd62c4b40a93a6ac4b4f48b93ef3065dad536ad7a12f490eb39" protocol=ttrpc version=3 Sep 9 05:37:32.701764 systemd[1]: Started cri-containerd-e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69.scope - libcontainer container e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69. Sep 9 05:37:32.736648 containerd[1901]: time="2025-09-09T05:37:32.736593935Z" level=info msg="StartContainer for \"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\" returns successfully" Sep 9 05:37:32.756623 systemd[1]: cri-containerd-e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69.scope: Deactivated successfully. Sep 9 05:37:32.757231 systemd[1]: cri-containerd-e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69.scope: Consumed 21ms CPU time, 9.6M memory peak, 3.1M read from disk. 
Sep 9 05:37:32.759175 containerd[1901]: time="2025-09-09T05:37:32.759138832Z" level=info msg="received exit event container_id:\"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\" id:\"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\" pid:5244 exited_at:{seconds:1757396252 nanos:758488671}" Sep 9 05:37:32.759436 containerd[1901]: time="2025-09-09T05:37:32.759298927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\" id:\"e366ca8cf8be4c24ade3d17780c09cec2bbd1adc0d2ccee66b3042c0f4024e69\" pid:5244 exited_at:{seconds:1757396252 nanos:758488671}" Sep 9 05:37:32.769827 sshd[5213]: Accepted publickey for core from 147.75.109.163 port 51018 ssh2: RSA SHA256:k1gUnX9WA3dyp6ylgbUnG2K6cUpm99lcEZsxDzZ5bM4 Sep 9 05:37:32.771481 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:37:32.779075 systemd-logind[1872]: New session 26 of user core. Sep 9 05:37:32.786804 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 05:37:33.161571 containerd[1901]: time="2025-09-09T05:37:33.161386993Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:37:33.173268 containerd[1901]: time="2025-09-09T05:37:33.173221969Z" level=info msg="Container 795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:33.183583 containerd[1901]: time="2025-09-09T05:37:33.183525526Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\"" Sep 9 05:37:33.184197 containerd[1901]: time="2025-09-09T05:37:33.184170846Z" level=info msg="StartContainer for \"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\"" Sep 9 05:37:33.185509 containerd[1901]: time="2025-09-09T05:37:33.185372921Z" level=info msg="connecting to shim 795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31" address="unix:///run/containerd/s/53ce83808c8accd62c4b40a93a6ac4b4f48b93ef3065dad536ad7a12f490eb39" protocol=ttrpc version=3 Sep 9 05:37:33.208801 systemd[1]: Started cri-containerd-795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31.scope - libcontainer container 795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31. Sep 9 05:37:33.241251 containerd[1901]: time="2025-09-09T05:37:33.241206593Z" level=info msg="StartContainer for \"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\" returns successfully" Sep 9 05:37:33.254094 systemd[1]: cri-containerd-795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31.scope: Deactivated successfully. Sep 9 05:37:33.254410 systemd[1]: cri-containerd-795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31.scope: Consumed 20ms CPU time, 7.3M memory peak, 1.9M read from disk. 
Sep 9 05:37:33.256203 containerd[1901]: time="2025-09-09T05:37:33.256159893Z" level=info msg="received exit event container_id:\"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\" id:\"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\" pid:5298 exited_at:{seconds:1757396253 nanos:255913245}" Sep 9 05:37:33.256484 containerd[1901]: time="2025-09-09T05:37:33.256444958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\" id:\"795d65ad1fd513f6f3f7a2bacd32fe2db6223e0600871d9f4d1b1376bdfc9c31\" pid:5298 exited_at:{seconds:1757396253 nanos:255913245}" Sep 9 05:37:33.824291 kubelet[3285]: E0909 05:37:33.824245 3285 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:37:34.165441 containerd[1901]: time="2025-09-09T05:37:34.164726863Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:37:34.183344 containerd[1901]: time="2025-09-09T05:37:34.183302621Z" level=info msg="Container 13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:34.210434 containerd[1901]: time="2025-09-09T05:37:34.210380557Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\"" Sep 9 05:37:34.217277 containerd[1901]: time="2025-09-09T05:37:34.217218601Z" level=info msg="StartContainer for \"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\"" Sep 9 05:37:34.218667 containerd[1901]: time="2025-09-09T05:37:34.218627449Z" level=info msg="connecting to shim 13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6" address="unix:///run/containerd/s/53ce83808c8accd62c4b40a93a6ac4b4f48b93ef3065dad536ad7a12f490eb39" protocol=ttrpc version=3 Sep 9 05:37:34.248812 systemd[1]: Started cri-containerd-13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6.scope - libcontainer container 13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6. Sep 9 05:37:34.296351 containerd[1901]: time="2025-09-09T05:37:34.296254988Z" level=info msg="StartContainer for \"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\" returns successfully" Sep 9 05:37:34.304964 systemd[1]: cri-containerd-13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6.scope: Deactivated successfully. 
Sep 9 05:37:34.306517 containerd[1901]: time="2025-09-09T05:37:34.306478035Z" level=info msg="received exit event container_id:\"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\" id:\"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\" pid:5343 exited_at:{seconds:1757396254 nanos:306192458}" Sep 9 05:37:34.306955 containerd[1901]: time="2025-09-09T05:37:34.306928395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\" id:\"13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6\" pid:5343 exited_at:{seconds:1757396254 nanos:306192458}" Sep 9 05:37:34.331933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a8f2d6baf29c2fad54ae30b109823882aab82ec651634c4d3f3972f33004d6-rootfs.mount: Deactivated successfully. Sep 9 05:37:35.170011 containerd[1901]: time="2025-09-09T05:37:35.169196874Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 05:37:35.188284 containerd[1901]: time="2025-09-09T05:37:35.186665406Z" level=info msg="Container 013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:35.186982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395098175.mount: Deactivated successfully. Sep 9 05:37:35.202187 containerd[1901]: time="2025-09-09T05:37:35.202144068Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\"" Sep 9 05:37:35.202953 containerd[1901]: time="2025-09-09T05:37:35.202908391Z" level=info msg="StartContainer for \"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\"" Sep 9 05:37:35.204936 containerd[1901]: time="2025-09-09T05:37:35.204891522Z" level=info msg="connecting to shim 013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda" address="unix:///run/containerd/s/53ce83808c8accd62c4b40a93a6ac4b4f48b93ef3065dad536ad7a12f490eb39" protocol=ttrpc version=3 Sep 9 05:37:35.219892 kubelet[3285]: I0909 05:37:35.219843 3285 setters.go:602] "Node became not ready" node="ip-172-31-25-117" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T05:37:35Z","lastTransitionTime":"2025-09-09T05:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 05:37:35.234828 systemd[1]: Started cri-containerd-013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda.scope - libcontainer container 013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda. Sep 9 05:37:35.278948 systemd[1]: cri-containerd-013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda.scope: Deactivated successfully. 
Sep 9 05:37:35.281806 containerd[1901]: time="2025-09-09T05:37:35.281763600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\" id:\"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\" pid:5384 exited_at:{seconds:1757396255 nanos:281158767}" Sep 9 05:37:35.285159 containerd[1901]: time="2025-09-09T05:37:35.285059939Z" level=info msg="received exit event container_id:\"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\" id:\"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\" pid:5384 exited_at:{seconds:1757396255 nanos:281158767}" Sep 9 05:37:35.286797 containerd[1901]: time="2025-09-09T05:37:35.286707828Z" level=info msg="StartContainer for \"013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda\" returns successfully" Sep 9 05:37:35.308051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-013d31909c993d3d0c46f12490e938cde92beb18bad86f6e6958296cbc904bda-rootfs.mount: Deactivated successfully. Sep 9 05:37:36.175284 containerd[1901]: time="2025-09-09T05:37:36.175241313Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 05:37:36.194715 containerd[1901]: time="2025-09-09T05:37:36.192707365Z" level=info msg="Container 0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:36.208952 containerd[1901]: time="2025-09-09T05:37:36.208898187Z" level=info msg="CreateContainer within sandbox \"ab0e7c0df60deb1fdbad6e178f384d28561b9f78edde7070af738ff7b5f3f305\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\"" Sep 9 05:37:36.209862 containerd[1901]: time="2025-09-09T05:37:36.209762490Z" level=info msg="StartContainer for \"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\"" Sep 9 05:37:36.211769 containerd[1901]: time="2025-09-09T05:37:36.211728318Z" level=info msg="connecting to shim 0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4" address="unix:///run/containerd/s/53ce83808c8accd62c4b40a93a6ac4b4f48b93ef3065dad536ad7a12f490eb39" protocol=ttrpc version=3 Sep 9 05:37:36.234871 systemd[1]: Started cri-containerd-0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4.scope - libcontainer container 0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4. 
Sep 9 05:37:36.285168 containerd[1901]: time="2025-09-09T05:37:36.285119717Z" level=info msg="StartContainer for \"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\" returns successfully" Sep 9 05:37:36.402669 containerd[1901]: time="2025-09-09T05:37:36.402614417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\" id:\"3d0d0b914fcca5124215133efff03aadf1846f310233696f797ce5ec98d31f51\" pid:5451 exited_at:{seconds:1757396256 nanos:402176670}" Sep 9 05:37:37.046587 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 05:37:39.367894 containerd[1901]: time="2025-09-09T05:37:39.367846295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\" id:\"3b9947cfc12ba6199dddb7e88a93e496a13c543e757a88c02abf6ca656b27f51\" pid:5726 exit_status:1 exited_at:{seconds:1757396259 nanos:366970397}" Sep 9 05:37:40.242821 systemd-networkd[1817]: lxc_health: Link UP Sep 9 05:37:40.243212 systemd-networkd[1817]: lxc_health: Gained carrier Sep 9 05:37:40.251229 (udev-worker)[5943]: Network interface NamePolicy= disabled on kernel command line. Sep 9 05:37:40.563420 kubelet[3285]: I0909 05:37:40.563276 3285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9l4sk" podStartSLOduration=8.563255744 podStartE2EDuration="8.563255744s" podCreationTimestamp="2025-09-09 05:37:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:37:37.204281141 +0000 UTC m=+93.787128976" watchObservedRunningTime="2025-09-09 05:37:40.563255744 +0000 UTC m=+97.146103571" Sep 9 05:37:41.631733 containerd[1901]: time="2025-09-09T05:37:41.631685475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\" id:\"ec0e1f100d0e31f3e48fa5432b3c035ec9cc376311257b3c798b6981a2aaeb75\" pid:5973 exited_at:{seconds:1757396261 nanos:631329879}" Sep 9 05:37:42.252723 systemd-networkd[1817]: lxc_health: Gained IPv6LL Sep 9 05:37:43.827815 containerd[1901]: time="2025-09-09T05:37:43.827751908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\" id:\"42c63057d9f40e592381c58ab54c33dd837a5b68b56c931c566029cad236ae9e\" pid:6003 exited_at:{seconds:1757396263 nanos:824968584}" Sep 9 05:37:44.641335 ntpd[1863]: Listen normally on 15 lxc_health [fe80::a8a9:98ff:fe43:c5c1%14]:123 Sep 9 05:37:44.641842 ntpd[1863]: 9 Sep 05:37:44 ntpd[1863]: Listen normally on 15 lxc_health [fe80::a8a9:98ff:fe43:c5c1%14]:123 Sep 9 05:37:45.984544 containerd[1901]: time="2025-09-09T05:37:45.984497327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dc2c3be2c9a7cbb85d315e5ec76f339b9951583a0b6972c88526acfbfd8c6e4\" id:\"7a03052ebfd5455923f8c551e00592849a99578d535ba8692a3201b3a624fcf2\" pid:6035 exited_at:{seconds:1757396265 nanos:983723624}" Sep 9 05:37:46.014238 sshd[5278]: Connection closed by 147.75.109.163 port 51018 Sep 9 05:37:46.015700 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Sep 9 05:37:46.020465 systemd[1]: sshd@25-172.31.25.117:22-147.75.109.163:51018.service: Deactivated successfully. Sep 9 05:37:46.023863 systemd[1]: session-26.scope: Deactivated successfully. 
Sep 9 05:37:46.025360 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit. Sep 9 05:37:46.027674 systemd-logind[1872]: Removed session 26. Sep 9 05:38:00.128423 systemd[1]: cri-containerd-02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700.scope: Deactivated successfully. Sep 9 05:38:00.129261 systemd[1]: cri-containerd-02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700.scope: Consumed 3.479s CPU time, 73.2M memory peak, 21.3M read from disk. Sep 9 05:38:00.133366 containerd[1901]: time="2025-09-09T05:38:00.133234673Z" level=info msg="received exit event container_id:\"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\" id:\"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\" pid:3114 exit_status:1 exited_at:{seconds:1757396280 nanos:132760359}" Sep 9 05:38:00.134166 containerd[1901]: time="2025-09-09T05:38:00.133306243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\" id:\"02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700\" pid:3114 exit_status:1 exited_at:{seconds:1757396280 nanos:132760359}" Sep 9 05:38:00.169309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700-rootfs.mount: Deactivated successfully. Sep 9 05:38:00.252657 kubelet[3285]: I0909 05:38:00.252624 3285 scope.go:117] "RemoveContainer" containerID="02360799fce9bf620a4a6b9d306e225021af2ff66750e09c31006b0aea212700" Sep 9 05:38:00.257417 containerd[1901]: time="2025-09-09T05:38:00.257368243Z" level=info msg="CreateContainer within sandbox \"9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 9 05:38:00.274973 containerd[1901]: time="2025-09-09T05:38:00.273510101Z" level=info msg="Container d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:38:00.283809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2625870038.mount: Deactivated successfully. Sep 9 05:38:00.289207 containerd[1901]: time="2025-09-09T05:38:00.289149868Z" level=info msg="CreateContainer within sandbox \"9b78b2f2a6116e6822005f558999d4de7e911befa011a5078dad0e301375cb7e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92\"" Sep 9 05:38:00.290043 containerd[1901]: time="2025-09-09T05:38:00.289971965Z" level=info msg="StartContainer for \"d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92\"" Sep 9 05:38:00.291276 containerd[1901]: time="2025-09-09T05:38:00.291242206Z" level=info msg="connecting to shim d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92" address="unix:///run/containerd/s/16e7e41a156f580262d25be1ea12415793f80c8e481edd9da58710fc2afaf765" protocol=ttrpc version=3 Sep 9 05:38:00.320817 systemd[1]: Started cri-containerd-d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92.scope - libcontainer container d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92. 
Sep 9 05:38:00.381141 containerd[1901]: time="2025-09-09T05:38:00.381032337Z" level=info msg="StartContainer for \"d3ca457e2f11990c1f9ed638f4b4a7aeb7bcdef47dac5f4cab390e20014a1d92\" returns successfully" Sep 9 05:38:03.642622 containerd[1901]: time="2025-09-09T05:38:03.642567525Z" level=info msg="StopPodSandbox for \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\"" Sep 9 05:38:03.643349 containerd[1901]: time="2025-09-09T05:38:03.642730106Z" level=info msg="TearDown network for sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" successfully" Sep 9 05:38:03.643349 containerd[1901]: time="2025-09-09T05:38:03.642748763Z" level=info msg="StopPodSandbox for \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" returns successfully" Sep 9 05:38:03.643349 containerd[1901]: time="2025-09-09T05:38:03.643238778Z" level=info msg="RemovePodSandbox for \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\"" Sep 9 05:38:03.643349 containerd[1901]: time="2025-09-09T05:38:03.643266576Z" level=info msg="Forcibly stopping sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\"" Sep 9 05:38:03.643512 containerd[1901]: time="2025-09-09T05:38:03.643394852Z" level=info msg="TearDown network for sandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" successfully" Sep 9 05:38:03.644821 containerd[1901]: time="2025-09-09T05:38:03.644791051Z" level=info msg="Ensure that sandbox 47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2 in task-service has been cleanup successfully" Sep 9 05:38:03.651811 containerd[1901]: time="2025-09-09T05:38:03.651746854Z" level=info msg="RemovePodSandbox \"47c09c8ce358f869b3b7bbe61f914f969126ac7f365e4dacde39a7c33afc02d2\" returns successfully" Sep 9 05:38:03.652589 containerd[1901]: time="2025-09-09T05:38:03.652319894Z" level=info msg="StopPodSandbox for \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\"" Sep 9 05:38:03.652589 containerd[1901]: time="2025-09-09T05:38:03.652465944Z" level=info msg="TearDown network for sandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" successfully" Sep 9 05:38:03.652589 containerd[1901]: time="2025-09-09T05:38:03.652481442Z" level=info msg="StopPodSandbox for \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" returns successfully" Sep 9 05:38:03.653131 containerd[1901]: time="2025-09-09T05:38:03.653030630Z" level=info msg="RemovePodSandbox for \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\"" Sep 9 05:38:03.653131 containerd[1901]: time="2025-09-09T05:38:03.653066859Z" level=info msg="Forcibly stopping sandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\"" Sep 9 05:38:03.653268 containerd[1901]: time="2025-09-09T05:38:03.653198289Z" level=info msg="TearDown network for sandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" successfully" Sep 9 05:38:03.655074 containerd[1901]: time="2025-09-09T05:38:03.654535884Z" level=info msg="Ensure that sandbox 06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47 in task-service has been cleanup successfully" Sep 9 05:38:03.661723 containerd[1901]: time="2025-09-09T05:38:03.661668734Z" level=info msg="RemovePodSandbox \"06334b2c3b9caa2284ac6813262f7b62f8fb59a07bd7c39328f630b92caeed47\" returns successfully" Sep 9 05:38:04.986102 systemd[1]: cri-containerd-ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2.scope: Deactivated successfully. 
Sep 9 05:38:04.986865 systemd[1]: cri-containerd-ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2.scope: Consumed 2.230s CPU time, 32.7M memory peak, 14.6M read from disk. Sep 9 05:38:04.990111 containerd[1901]: time="2025-09-09T05:38:04.990070665Z" level=info msg="received exit event container_id:\"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\" id:\"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\" pid:3131 exit_status:1 exited_at:{seconds:1757396284 nanos:989425311}" Sep 9 05:38:04.991168 containerd[1901]: time="2025-09-09T05:38:04.990360345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\" id:\"ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2\" pid:3131 exit_status:1 exited_at:{seconds:1757396284 nanos:989425311}" Sep 9 05:38:05.018764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2-rootfs.mount: Deactivated successfully. Sep 9 05:38:05.271136 kubelet[3285]: I0909 05:38:05.271025 3285 scope.go:117] "RemoveContainer" containerID="ffc220fa1608a81b45d6192cbedd2153d09cba4db3d2bc183616002fcf4fc4c2" Sep 9 05:38:05.273519 containerd[1901]: time="2025-09-09T05:38:05.273486250Z" level=info msg="CreateContainer within sandbox \"0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 9 05:38:05.293838 containerd[1901]: time="2025-09-09T05:38:05.293792240Z" level=info msg="Container 530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:38:05.308638 containerd[1901]: time="2025-09-09T05:38:05.308564390Z" level=info msg="CreateContainer within sandbox \"0a7e2cdc47fe7a512f9fd3723192f1f6725bc854c0569213641071790e13d1fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580\"" Sep 9 05:38:05.309596 containerd[1901]: time="2025-09-09T05:38:05.309371085Z" level=info msg="StartContainer for \"530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580\"" Sep 9 05:38:05.310769 containerd[1901]: time="2025-09-09T05:38:05.310726120Z" level=info msg="connecting to shim 530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580" address="unix:///run/containerd/s/a500bfe95940e1318425e09565d5fabd125c60c6626c25da9c65c50928997300" protocol=ttrpc version=3 Sep 9 05:38:05.337818 systemd[1]: Started cri-containerd-530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580.scope - libcontainer container 530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580. 
Sep 9 05:38:05.411596 containerd[1901]: time="2025-09-09T05:38:05.411541430Z" level=info msg="StartContainer for \"530d84f38c70e664785a10f4c19f888d2dbdb2b4f8574d9261a263490b29c580\" returns successfully" 
Sep 9 05:38:06.169573 kubelet[3285]: E0909 05:38:06.168990 3285 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-117?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Sep 9 05:38:16.170433 kubelet[3285]: E0909 05:38:16.170132 3285 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-117?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"