Sep 12 17:47:31.941483 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:47:31.941527 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:31.941543 kernel: BIOS-provided physical RAM map:
Sep 12 17:47:31.941555 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:47:31.941567 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Sep 12 17:47:31.941579 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 17:47:31.941594 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:47:31.941605 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:47:31.941618 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:47:31.941630 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:47:31.941642 kernel: NX (Execute Disable) protection: active
Sep 12 17:47:31.941653 kernel: APIC: Static calls initialized
Sep 12 17:47:31.941664 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Sep 12 17:47:31.941675 kernel: extended physical RAM map:
Sep 12 17:47:31.941693 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:47:31.941706 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Sep 12 17:47:31.941719 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Sep 12 17:47:31.941731 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Sep 12 17:47:31.941742 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Sep 12 17:47:31.941755 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Sep 12 17:47:31.941768 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Sep 12 17:47:31.941781 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Sep 12 17:47:31.941793 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Sep 12 17:47:31.941807 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:47:31.941824 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Sep 12 17:47:31.941836 kernel: secureboot: Secure boot disabled
Sep 12 17:47:31.941850 kernel: SMBIOS 2.7 present.
Sep 12 17:47:31.941863 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Sep 12 17:47:31.941876 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:47:31.941889 kernel: Hypervisor detected: KVM
Sep 12 17:47:31.941902 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:47:31.941915 kernel: kvm-clock: using sched offset of 5054744762 cycles
Sep 12 17:47:31.941929 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:47:31.941943 kernel: tsc: Detected 2499.998 MHz processor
Sep 12 17:47:31.941956 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:47:31.941973 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:47:31.941988 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Sep 12 17:47:31.942002 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:47:31.942017 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:47:31.942032 kernel: Using GB pages for direct mapping
Sep 12 17:47:31.942052 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:47:31.942071 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Sep 12 17:47:31.943155 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:47:31.943176 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:47:31.943190 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Sep 12 17:47:31.943205 kernel: ACPI: FACS 0x00000000789D0000 000040
Sep 12 17:47:31.943219 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Sep 12 17:47:31.943234 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:47:31.943249 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:47:31.943269 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Sep 12 17:47:31.943283 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Sep 12 17:47:31.943298 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:47:31.943312 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Sep 12 17:47:31.943327 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Sep 12 17:47:31.943342 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Sep 12 17:47:31.943355 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Sep 12 17:47:31.943369 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Sep 12 17:47:31.943386 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Sep 12 17:47:31.943400 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Sep 12 17:47:31.943414 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Sep 12 17:47:31.943429 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Sep 12 17:47:31.943444 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Sep 12 17:47:31.943457 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Sep 12 17:47:31.943471 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Sep 12 17:47:31.943485 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Sep 12 17:47:31.943499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Sep 12 17:47:31.943513 kernel: NUMA: Initialized distance table, cnt=1
Sep 12 17:47:31.943532 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Sep 12 17:47:31.943547 kernel: Zone ranges:
Sep 12 17:47:31.943562 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:47:31.943577 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Sep 12 17:47:31.943592 kernel: Normal empty
Sep 12 17:47:31.943607 kernel: Device empty
Sep 12 17:47:31.943621 kernel: Movable zone start for each node
Sep 12 17:47:31.943636 kernel: Early memory node ranges
Sep 12 17:47:31.943651 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:47:31.943667 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Sep 12 17:47:31.943682 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Sep 12 17:47:31.943697 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Sep 12 17:47:31.943711 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:47:31.943726 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:47:31.943742 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:47:31.943757 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Sep 12 17:47:31.943772 kernel: ACPI: PM-Timer IO Port: 0xb008
Sep 12 17:47:31.943786 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:47:31.943799 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Sep 12 17:47:31.943816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:47:31.943831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:47:31.943846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:47:31.943859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:47:31.943872 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:47:31.943886 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:47:31.943900 kernel: TSC deadline timer available
Sep 12 17:47:31.943913 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:47:31.943926 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:47:31.943943 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:47:31.943958 kernel: CPU topo: Max. threads per core: 2
Sep 12 17:47:31.943973 kernel: CPU topo: Num. cores per package: 1
Sep 12 17:47:31.943987 kernel: CPU topo: Num. threads per package: 2
Sep 12 17:47:31.944001 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 17:47:31.944014 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:47:31.944027 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Sep 12 17:47:31.944041 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:47:31.944055 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:47:31.944074 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:47:31.944102 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 17:47:31.944117 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 17:47:31.944132 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:47:31.944146 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:47:31.944162 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:47:31.944179 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:31.944195 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:47:31.944213 kernel: random: crng init done
Sep 12 17:47:31.944228 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:47:31.944243 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:47:31.944257 kernel: Fallback order for Node 0: 0
Sep 12 17:47:31.944271 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Sep 12 17:47:31.944287 kernel: Policy zone: DMA32
Sep 12 17:47:31.944313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:47:31.944333 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:47:31.944352 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:47:31.944371 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:47:31.944390 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:47:31.944409 kernel: Dynamic Preempt: voluntary
Sep 12 17:47:31.944431 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:47:31.944452 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:47:31.944471 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:47:31.944490 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:47:31.944510 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:47:31.944531 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:47:31.944551 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:47:31.944570 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:47:31.944589 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:31.944608 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:31.944628 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:31.944647 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:47:31.944666 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:47:31.944686 kernel: Console: colour dummy device 80x25
Sep 12 17:47:31.944707 kernel: printk: legacy console [tty0] enabled
Sep 12 17:47:31.944726 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:47:31.944745 kernel: ACPI: Core revision 20240827
Sep 12 17:47:31.944764 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Sep 12 17:47:31.944783 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:47:31.944802 kernel: x2apic enabled
Sep 12 17:47:31.944821 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:47:31.944841 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 17:47:31.944860 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 12 17:47:31.944882 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 12 17:47:31.944901 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 12 17:47:31.944920 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:47:31.944939 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:47:31.944957 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:47:31.944976 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:47:31.944995 kernel: RETBleed: Vulnerable
Sep 12 17:47:31.945013 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:47:31.945032 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:47:31.945050 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:47:31.945072 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 12 17:47:31.945572 kernel: active return thunk: its_return_thunk
Sep 12 17:47:31.945590 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:47:31.945606 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:47:31.945624 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:47:31.945640 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:47:31.945657 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 12 17:47:31.945674 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 12 17:47:31.945690 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:47:31.945707 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:47:31.945724 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:47:31.945747 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Sep 12 17:47:31.945764 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:47:31.945780 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 12 17:47:31.945797 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 12 17:47:31.945813 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Sep 12 17:47:31.945830 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Sep 12 17:47:31.945846 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Sep 12 17:47:31.945863 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Sep 12 17:47:31.945880 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Sep 12 17:47:31.945896 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:47:31.945913 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:47:31.945934 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:47:31.945951 kernel: landlock: Up and running.
Sep 12 17:47:31.945967 kernel: SELinux: Initializing.
Sep 12 17:47:31.945984 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:47:31.946001 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:47:31.946017 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Sep 12 17:47:31.946033 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Sep 12 17:47:31.946051 kernel: signal: max sigframe size: 3632
Sep 12 17:47:31.946068 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:47:31.946114 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:47:31.946131 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:47:31.946153 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:47:31.946170 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:47:31.946187 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:47:31.946204 kernel: .... node #0, CPUs: #1
Sep 12 17:47:31.946221 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 12 17:47:31.946240 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 12 17:47:31.946256 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:47:31.946273 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 12 17:47:31.946295 kernel: Memory: 1908056K/2037804K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 125192K reserved, 0K cma-reserved)
Sep 12 17:47:31.946311 kernel: devtmpfs: initialized
Sep 12 17:47:31.946328 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:47:31.946345 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Sep 12 17:47:31.946362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:47:31.946379 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:47:31.946396 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:47:31.946413 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:47:31.946430 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:47:31.946451 kernel: audit: type=2000 audit(1757699249.171:1): state=initialized audit_enabled=0 res=1
Sep 12 17:47:31.946468 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:47:31.946483 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:47:31.946497 kernel: cpuidle: using governor menu
Sep 12 17:47:31.946514 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:47:31.946530 kernel: dca service started, version 1.12.1
Sep 12 17:47:31.946547 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:47:31.946564 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:47:31.946580 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:47:31.946601 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:47:31.946617 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:47:31.946632 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:47:31.946646 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:47:31.946659 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:47:31.946682 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:47:31.946697 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Sep 12 17:47:31.946711 kernel: ACPI: Interpreter enabled
Sep 12 17:47:31.946726 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:47:31.946744 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:47:31.946759 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:47:31.946774 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:47:31.946788 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:47:31.946803 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:47:31.947032 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:47:31.947193 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:47:31.947331 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:47:31.947350 kernel: acpiphp: Slot [3] registered
Sep 12 17:47:31.947367 kernel: acpiphp: Slot [4] registered
Sep 12 17:47:31.947382 kernel: acpiphp: Slot [5] registered
Sep 12 17:47:31.947397 kernel: acpiphp: Slot [6] registered
Sep 12 17:47:31.947413 kernel: acpiphp: Slot [7] registered
Sep 12 17:47:31.947428 kernel: acpiphp: Slot [8] registered
Sep 12 17:47:31.947444 kernel: acpiphp: Slot [9] registered
Sep 12 17:47:31.947459 kernel: acpiphp: Slot [10] registered
Sep 12 17:47:31.947478 kernel: acpiphp: Slot [11] registered
Sep 12 17:47:31.947493 kernel: acpiphp: Slot [12] registered
Sep 12 17:47:31.947510 kernel: acpiphp: Slot [13] registered
Sep 12 17:47:31.947526 kernel: acpiphp: Slot [14] registered
Sep 12 17:47:31.947541 kernel: acpiphp: Slot [15] registered
Sep 12 17:47:31.947557 kernel: acpiphp: Slot [16] registered
Sep 12 17:47:31.947573 kernel: acpiphp: Slot [17] registered
Sep 12 17:47:31.947589 kernel: acpiphp: Slot [18] registered
Sep 12 17:47:31.947605 kernel: acpiphp: Slot [19] registered
Sep 12 17:47:31.947621 kernel: acpiphp: Slot [20] registered
Sep 12 17:47:31.947639 kernel: acpiphp: Slot [21] registered
Sep 12 17:47:31.947654 kernel: acpiphp: Slot [22] registered
Sep 12 17:47:31.947669 kernel: acpiphp: Slot [23] registered
Sep 12 17:47:31.947685 kernel: acpiphp: Slot [24] registered
Sep 12 17:47:31.947700 kernel: acpiphp: Slot [25] registered
Sep 12 17:47:31.947715 kernel: acpiphp: Slot [26] registered
Sep 12 17:47:31.947731 kernel: acpiphp: Slot [27] registered
Sep 12 17:47:31.947746 kernel: acpiphp: Slot [28] registered
Sep 12 17:47:31.947762 kernel: acpiphp: Slot [29] registered
Sep 12 17:47:31.947780 kernel: acpiphp: Slot [30] registered
Sep 12 17:47:31.947795 kernel: acpiphp: Slot [31] registered
Sep 12 17:47:31.947811 kernel: PCI host bridge to bus 0000:00
Sep 12 17:47:31.947948 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:47:31.948070 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:47:31.951149 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:47:31.951282 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:47:31.951400 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:47:31.951843 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:47:31.952007 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:47:31.952181 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 12 17:47:31.952334 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Sep 12 17:47:31.952483 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Sep 12 17:47:31.952632 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Sep 12 17:47:31.952784 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Sep 12 17:47:31.952930 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Sep 12 17:47:31.953075 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Sep 12 17:47:31.954022 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Sep 12 17:47:31.955219 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Sep 12 17:47:31.955376 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 17:47:31.955513 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Sep 12 17:47:31.955654 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 17:47:31.955789 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:47:31.955931 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Sep 12 17:47:31.956068 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Sep 12 17:47:31.956563 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Sep 12 17:47:31.956702 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Sep 12 17:47:31.956728 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:47:31.956745 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:47:31.956760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:47:31.956776 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:47:31.956793 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:47:31.956808 kernel: iommu: Default domain type: Translated
Sep 12 17:47:31.956824 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:47:31.956840 kernel: efivars: Registered efivars operations
Sep 12 17:47:31.956856 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:47:31.956875 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:47:31.956891 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Sep 12 17:47:31.956906 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Sep 12 17:47:31.956921 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Sep 12 17:47:31.957059 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Sep 12 17:47:31.958249 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Sep 12 17:47:31.958408 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:47:31.958428 kernel: vgaarb: loaded
Sep 12 17:47:31.958443 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 12 17:47:31.958462 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Sep 12 17:47:31.958477 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:47:31.958491 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:47:31.958506 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:47:31.958520 kernel: pnp: PnP ACPI init
Sep 12 17:47:31.958534 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 17:47:31.958549 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:47:31.958564 kernel: NET: Registered PF_INET protocol family
Sep 12 17:47:31.958579 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:47:31.958596 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:47:31.958611 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:47:31.958625 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:47:31.958639 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:47:31.958654 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:47:31.958669 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:47:31.958684 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:47:31.958699 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:47:31.958714 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:47:31.958842 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:47:31.958958 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:47:31.959072 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:47:31.959208 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:47:31.959327 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Sep 12 17:47:31.959472 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:47:31.959493 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:47:31.959510 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:47:31.959531 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 17:47:31.959547 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:47:31.959563 kernel: Initialise system trusted keyrings
Sep 12 17:47:31.959579 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:47:31.959595 kernel: Key type asymmetric registered
Sep 12 17:47:31.959611 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:47:31.959626 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:47:31.959643 kernel: io scheduler mq-deadline registered
Sep 12 17:47:31.959660 kernel: io scheduler kyber registered
Sep 12 17:47:31.959679 kernel: io scheduler bfq registered
Sep 12 17:47:31.959694 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:47:31.959710 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:47:31.959727 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:47:31.959744 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:47:31.959759 kernel: i8042: Warning: Keylock active
Sep 12 17:47:31.959775 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:47:31.959791 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:47:31.959939 kernel: rtc_cmos 00:00: RTC can wake from S4
Sep 12 17:47:31.960070 kernel: rtc_cmos 00:00: registered as rtc0
Sep 12 17:47:31.963508 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:47:31 UTC (1757699251)
Sep 12 17:47:31.963642 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Sep 12 17:47:31.963688 kernel: intel_pstate: CPU model not supported
Sep 12 17:47:31.963708 kernel: efifb: probing for efifb
Sep 12 17:47:31.963726 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Sep 12 17:47:31.963743 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Sep 12 17:47:31.963763 kernel: efifb: scrolling: redraw
Sep 12 17:47:31.963779 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:47:31.963796 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:47:31.963813 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:47:31.963829 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:47:31.963847 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:47:31.963864 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:47:31.963880 kernel: Segment Routing with IPv6
Sep 12 17:47:31.963897 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:47:31.963914 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:47:31.963934 kernel: Key type dns_resolver registered
Sep 12 17:47:31.963950 kernel: IPI shorthand broadcast: enabled
Sep 12 17:47:31.963967 kernel: sched_clock: Marking stable (2641001816, 146815683)->(2881107520, -93290021)
Sep 12 17:47:31.963983 kernel: registered taskstats version 1
Sep 12 17:47:31.964000 kernel: Loading compiled-in X.509 certificates
Sep 12 17:47:31.964017 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:47:31.964034 kernel: Demotion targets for Node 0: null
Sep 12 17:47:31.964051 kernel: Key type .fscrypt registered
Sep 12 17:47:31.964068 kernel: Key type fscrypt-provisioning registered
Sep 12 17:47:31.964102 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:47:31.964120 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:47:31.964135 kernel: ima: No architecture policies found
Sep 12 17:47:31.964150 kernel: clk: Disabling unused clocks
Sep 12 17:47:31.964165 kernel: Warning: unable to open an initial console.
Sep 12 17:47:31.964180 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 17:47:31.964195 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 17:47:31.964213 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 17:47:31.964232 kernel: Run /init as init process
Sep 12 17:47:31.964248 kernel: with arguments:
Sep 12 17:47:31.964264 kernel: /init
Sep 12 17:47:31.964278 kernel: with environment:
Sep 12 17:47:31.964292 kernel: HOME=/
Sep 12 17:47:31.964308 kernel: TERM=linux
Sep 12 17:47:31.964326 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:47:31.964344 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:47:31.964366 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:47:31.964388 systemd[1]: Detected virtualization amazon.
Sep 12 17:47:31.964405 systemd[1]: Detected architecture x86-64.
Sep 12 17:47:31.964421 systemd[1]: Running in initrd.
Sep 12 17:47:31.964440 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:47:31.964465 systemd[1]: Hostname set to .
Sep 12 17:47:31.964482 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:47:31.964499 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:47:31.964517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:31.964533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:31.964553 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:47:31.964572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:47:31.964591 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:47:31.964614 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:47:31.964636 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:47:31.964655 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:47:31.964672 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:31.964687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:31.964704 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:47:31.964721 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:47:31.964742 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:47:31.964761 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:47:31.964781 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:47:31.964799 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:47:31.964818 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:47:31.964837 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:47:31.964856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:31.964874 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:31.964895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:31.964913 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:47:31.964932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:47:31.964951 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:47:31.964969 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:47:31.964988 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:47:31.965010 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:47:31.965029 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:47:31.965044 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:47:31.965066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:31.967141 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:47:31.967170 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:31.967188 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:47:31.967212 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:47:31.967271 systemd-journald[206]: Collecting audit messages is disabled.
Sep 12 17:47:31.967311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:31.967333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:47:31.967355 systemd-journald[206]: Journal started
Sep 12 17:47:31.967390 systemd-journald[206]: Runtime Journal (/run/log/journal/ec2da26bb19bcc873304e1dfc4a80109) is 4.8M, max 38.4M, 33.6M free.
Sep 12 17:47:31.938135 systemd-modules-load[208]: Inserted module 'overlay'
Sep 12 17:47:31.974110 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:47:31.978270 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:47:31.984221 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:47:31.991230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:47:31.992408 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:47:32.000982 kernel: Bridge firewalling registered
Sep 12 17:47:32.000936 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 12 17:47:32.002419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:32.012235 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 12 17:47:32.007238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:47:32.016513 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:47:32.024247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:32.025285 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:32.030842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:47:32.033832 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:47:32.036146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:32.041236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:32.064714 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:32.094034 systemd-resolved[246]: Positive Trust Anchors:
Sep 12 17:47:32.094973 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:47:32.095030 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:47:32.102373 systemd-resolved[246]: Defaulting to hostname 'linux'.
Sep 12 17:47:32.105630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:32.106372 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:32.161121 kernel: SCSI subsystem initialized
Sep 12 17:47:32.170123 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:47:32.182120 kernel: iscsi: registered transport (tcp)
Sep 12 17:47:32.203501 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:47:32.203582 kernel: QLogic iSCSI HBA Driver
Sep 12 17:47:32.221966 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:47:32.237612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:32.240058 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:47:32.286610 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:47:32.288642 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:47:32.342136 kernel: raid6: avx512x4 gen() 17793 MB/s
Sep 12 17:47:32.360114 kernel: raid6: avx512x2 gen() 17636 MB/s
Sep 12 17:47:32.378120 kernel: raid6: avx512x1 gen() 17649 MB/s
Sep 12 17:47:32.396110 kernel: raid6: avx2x4 gen() 17542 MB/s
Sep 12 17:47:32.414129 kernel: raid6: avx2x2 gen() 17482 MB/s
Sep 12 17:47:32.432381 kernel: raid6: avx2x1 gen() 13577 MB/s
Sep 12 17:47:32.432434 kernel: raid6: using algorithm avx512x4 gen() 17793 MB/s
Sep 12 17:47:32.451374 kernel: raid6: .... xor() 7833 MB/s, rmw enabled
Sep 12 17:47:32.451448 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:47:32.472117 kernel: xor: automatically using best checksumming function avx
Sep 12 17:47:32.642125 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:47:32.648508 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:47:32.650799 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:47:32.678462 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 12 17:47:32.685197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:47:32.689312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:47:32.716353 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Sep 12 17:47:32.743414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:47:32.745320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:47:32.803663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:47:32.806507 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:47:32.868467 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:47:32.868697 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:47:32.869959 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:47:32.875223 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 12 17:47:32.894137 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:35:53:a5:76:d3
Sep 12 17:47:32.894381 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:47:32.899411 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:47:32.899620 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:47:32.918412 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:47:32.925133 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:47:32.930179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:47:32.930902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:32.931576 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:32.953006 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:47:32.953048 kernel: GPT:9289727 != 16777215
Sep 12 17:47:32.953067 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:47:32.953758 kernel: GPT:9289727 != 16777215
Sep 12 17:47:32.953793 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:47:32.953811 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:32.939059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:32.944123 (udev-worker)[504]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:47:32.954299 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:47:32.978305 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:32.998155 kernel: nvme nvme0: using unchecked data buffer
Sep 12 17:47:33.081868 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:47:33.105618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:47:33.117098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:47:33.118490 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:47:33.136465 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:47:33.136993 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:47:33.138356 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:47:33.139231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:47:33.140235 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:47:33.141913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:47:33.143745 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:47:33.162300 disk-uuid[688]: Primary Header is updated.
Sep 12 17:47:33.162300 disk-uuid[688]: Secondary Entries is updated.
Sep 12 17:47:33.162300 disk-uuid[688]: Secondary Header is updated.
Sep 12 17:47:33.166319 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:47:33.168197 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:33.182129 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:34.181110 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:34.182413 disk-uuid[693]: The operation has completed successfully.
Sep 12 17:47:34.295973 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:47:34.296073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:47:34.345211 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:47:34.360933 sh[954]: Success
Sep 12 17:47:34.388131 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:47:34.388212 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:47:34.391356 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:47:34.402108 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 12 17:47:34.511264 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:47:34.515224 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:47:34.527019 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:47:34.555131 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (977)
Sep 12 17:47:34.559170 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32
Sep 12 17:47:34.559246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:47:34.637211 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:47:34.637290 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:47:34.639630 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:47:34.654738 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:47:34.655632 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:47:34.656574 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:47:34.657341 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:47:34.660214 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:47:34.702111 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012)
Sep 12 17:47:34.706571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:47:34.706626 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:47:34.715550 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:47:34.715614 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:47:34.721107 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:47:34.722880 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:47:34.725723 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:47:34.766200 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:47:34.768732 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:47:34.814770 systemd-networkd[1147]: lo: Link UP
Sep 12 17:47:34.814783 systemd-networkd[1147]: lo: Gained carrier
Sep 12 17:47:34.816497 systemd-networkd[1147]: Enumeration completed
Sep 12 17:47:34.816926 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:34.816932 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:47:34.817250 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:47:34.818519 systemd[1]: Reached target network.target - Network.
Sep 12 17:47:34.820223 systemd-networkd[1147]: eth0: Link UP
Sep 12 17:47:34.820229 systemd-networkd[1147]: eth0: Gained carrier
Sep 12 17:47:34.820246 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:34.830179 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.16.223/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:47:35.047258 ignition[1091]: Ignition 2.21.0
Sep 12 17:47:35.047275 ignition[1091]: Stage: fetch-offline
Sep 12 17:47:35.047441 ignition[1091]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:35.047449 ignition[1091]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:47:35.047968 ignition[1091]: Ignition finished successfully
Sep 12 17:47:35.049100 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:47:35.051199 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:47:35.078783 ignition[1157]: Ignition 2.21.0
Sep 12 17:47:35.078806 ignition[1157]: Stage: fetch
Sep 12 17:47:35.079214 ignition[1157]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:35.079228 ignition[1157]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:47:35.079343 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:47:35.087520 ignition[1157]: PUT result: OK
Sep 12 17:47:35.089758 ignition[1157]: parsed url from cmdline: ""
Sep 12 17:47:35.089771 ignition[1157]: no config URL provided
Sep 12 17:47:35.089783 ignition[1157]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:47:35.089799 ignition[1157]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:47:35.089831 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:47:35.092011 ignition[1157]: PUT result: OK
Sep 12 17:47:35.092487 ignition[1157]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:47:35.093262 ignition[1157]: GET result: OK
Sep 12 17:47:35.094199 ignition[1157]: parsing config with SHA512: 3dbe07337eed2d4d54baaa02ab94c54b374ee99a1e368399785f6f5b1239d75c3db8f8b3771c776b2f186fc7e80805a9c45ad1a0f808e3afedcd36cc03a8db98
Sep 12 17:47:35.099245 unknown[1157]: fetched base config from "system"
Sep 12 17:47:35.099760 unknown[1157]: fetched base config from "system"
Sep 12 17:47:35.100187 ignition[1157]: fetch: fetch complete
Sep 12 17:47:35.099771 unknown[1157]: fetched user config from "aws"
Sep 12 17:47:35.100192 ignition[1157]: fetch: fetch passed
Sep 12 17:47:35.100236 ignition[1157]: Ignition finished successfully
Sep 12 17:47:35.103113 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:47:35.104867 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:47:35.136247 ignition[1164]: Ignition 2.21.0
Sep 12 17:47:35.136263 ignition[1164]: Stage: kargs
Sep 12 17:47:35.136562 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:35.136571 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:47:35.136665 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:47:35.137624 ignition[1164]: PUT result: OK
Sep 12 17:47:35.142406 ignition[1164]: kargs: kargs passed
Sep 12 17:47:35.142471 ignition[1164]: Ignition finished successfully
Sep 12 17:47:35.144434 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:47:35.145981 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:47:35.203221 ignition[1171]: Ignition 2.21.0
Sep 12 17:47:35.203323 ignition[1171]: Stage: disks
Sep 12 17:47:35.204435 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:35.204455 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:47:35.204538 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:47:35.205401 ignition[1171]: PUT result: OK
Sep 12 17:47:35.207676 ignition[1171]: disks: disks passed
Sep 12 17:47:35.207732 ignition[1171]: Ignition finished successfully
Sep 12 17:47:35.209571 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:47:35.210140 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:47:35.210447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:47:35.211017 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:47:35.211549 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:47:35.212113 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:47:35.213772 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:47:35.248576 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:47:35.251199 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:47:35.252890 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:47:35.413112 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:47:35.414545 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:47:35.415647 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:47:35.418129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:47:35.420678 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:47:35.424217 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:47:35.425454 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:47:35.425492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:47:35.433767 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:47:35.435971 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:47:35.452134 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1199) Sep 12 17:47:35.459071 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:35.459146 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:35.467848 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:47:35.467914 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:47:35.470056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:47:35.687560 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:47:35.693034 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:47:35.698045 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:47:35.703644 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:47:35.946469 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:47:35.948454 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:47:35.951488 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:47:35.966683 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:47:35.969907 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:36.000704 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:47:36.006362 ignition[1312]: INFO : Ignition 2.21.0 Sep 12 17:47:36.006362 ignition[1312]: INFO : Stage: mount Sep 12 17:47:36.007891 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:36.007891 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:47:36.007891 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:47:36.007891 ignition[1312]: INFO : PUT result: OK Sep 12 17:47:36.013268 ignition[1312]: INFO : mount: mount passed Sep 12 17:47:36.014759 ignition[1312]: INFO : Ignition finished successfully Sep 12 17:47:36.015667 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:47:36.017179 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:47:36.037866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:47:36.067133 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1323) Sep 12 17:47:36.070348 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:36.070410 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:36.079561 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:47:36.079642 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:47:36.081617 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:47:36.109948 ignition[1339]: INFO : Ignition 2.21.0 Sep 12 17:47:36.109948 ignition[1339]: INFO : Stage: files Sep 12 17:47:36.111377 ignition[1339]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:36.111377 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:47:36.111377 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:47:36.111377 ignition[1339]: INFO : PUT result: OK Sep 12 17:47:36.114614 ignition[1339]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:47:36.117247 ignition[1339]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:47:36.117247 ignition[1339]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:47:36.122251 ignition[1339]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:47:36.123070 ignition[1339]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:47:36.123848 unknown[1339]: wrote ssh authorized keys file for user: core Sep 12 17:47:36.124351 ignition[1339]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:47:36.127066 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:47:36.127858 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:47:36.221632 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:47:36.746240 systemd-networkd[1147]: eth0: Gained IPv6LL Sep 12 17:47:36.752127 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:47:36.752127 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:47:36.753952 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 17:47:36.947371 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:47:37.072644 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:47:37.079879 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:47:37.079879 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:47:37.079879 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:37.082682 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:37.082682 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:37.082682 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:47:37.550054 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:47:37.960024 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:37.960024 ignition[1339]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:47:37.962566 ignition[1339]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:47:37.966847 ignition[1339]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:47:37.966847 ignition[1339]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:47:37.966847 ignition[1339]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:47:37.970549 ignition[1339]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:47:37.970549 ignition[1339]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:47:37.970549 ignition[1339]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:47:37.970549 ignition[1339]: INFO : files: files passed Sep 12 17:47:37.970549 ignition[1339]: INFO : Ignition finished successfully Sep 12 17:47:37.968791 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:47:37.970777 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:47:37.975253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:47:37.988358 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:47:37.988454 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:47:37.994790 initrd-setup-root-after-ignition[1369]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:47:37.994790 initrd-setup-root-after-ignition[1369]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:47:37.996960 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:47:37.998173 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:47:37.998977 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:47:38.000356 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:47:38.050315 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:47:38.050431 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:47:38.051749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:47:38.052402 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:47:38.053132 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:47:38.054047 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:47:38.092784 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:47:38.094968 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:47:38.115724 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:47:38.116381 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:47:38.117267 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:47:38.118292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:47:38.118448 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:47:38.119428 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Sep 12 17:47:38.120179 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:47:38.120911 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:47:38.121674 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:47:38.122395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:47:38.123071 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:47:38.123780 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:47:38.124413 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:47:38.125181 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:47:38.126255 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:47:38.126955 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:47:38.127666 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:47:38.127791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:47:38.128660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:47:38.129501 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:47:38.130157 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:47:38.130826 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:47:38.131293 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:47:38.131415 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:47:38.132567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:47:38.132713 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:47:38.133363 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:47:38.133633 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:47:38.136196 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:47:38.138266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:47:38.138787 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:47:38.138940 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:47:38.141386 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:47:38.141553 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:47:38.148627 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:47:38.148722 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:47:38.163151 ignition[1393]: INFO : Ignition 2.21.0 Sep 12 17:47:38.164292 ignition[1393]: INFO : Stage: umount Sep 12 17:47:38.164292 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:38.164292 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:47:38.164292 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:47:38.166641 ignition[1393]: INFO : PUT result: OK Sep 12 17:47:38.168217 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Sep 12 17:47:38.170257 ignition[1393]: INFO : umount: umount passed Sep 12 17:47:38.170257 ignition[1393]: INFO : Ignition finished successfully Sep 12 17:47:38.171921 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:47:38.172029 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:47:38.173375 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:47:38.173549 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:47:38.174283 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:47:38.174338 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:47:38.175007 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:47:38.175056 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:47:38.175644 systemd[1]: Stopped target network.target - Network. Sep 12 17:47:38.176268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:47:38.176321 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:47:38.176911 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:47:38.177566 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:47:38.181172 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:47:38.181699 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:47:38.182790 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:47:38.183470 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:47:38.183512 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:47:38.184163 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:47:38.184199 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:47:38.184739 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:47:38.184795 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:47:38.185354 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:47:38.185544 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:47:38.186180 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:47:38.186837 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:47:38.188075 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:47:38.188199 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:47:38.190276 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:47:38.190365 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:47:38.191399 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:47:38.191499 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:47:38.195045 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:47:38.195590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:47:38.195659 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:47:38.197504 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:47:38.199674 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 12 17:47:38.199781 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:47:38.201686 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:47:38.201854 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:47:38.202651 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:47:38.202685 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:47:38.205168 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:47:38.205649 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:47:38.205700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:47:38.206067 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:47:38.206119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:47:38.208710 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:47:38.208750 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:47:38.209254 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:47:38.212380 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:47:38.217601 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:47:38.218331 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:47:38.219900 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:47:38.220203 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:47:38.221214 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:47:38.221257 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:47:38.221981 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:47:38.222029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:47:38.223000 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:47:38.223042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:47:38.224741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:47:38.224794 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:47:38.226733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:47:38.228140 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:47:38.228189 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:47:38.229162 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:47:38.229206 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:47:38.231316 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:47:38.231356 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:47:38.232206 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:47:38.232243 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 12 17:47:38.233535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:47:38.233582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:38.234695 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:47:38.238188 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:47:38.243865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:47:38.243966 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:47:38.245345 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:47:38.246844 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:47:38.273752 systemd[1]: Switching root. Sep 12 17:47:38.307698 systemd-journald[206]: Journal stopped Sep 12 17:47:39.991822 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Sep 12 17:47:39.991904 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:47:39.991927 kernel: SELinux: policy capability open_perms=1 Sep 12 17:47:39.991946 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:47:39.991965 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:47:39.991985 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:47:39.992003 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:47:39.992026 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:47:39.992045 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:47:39.992074 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:47:39.994147 kernel: audit: type=1403 audit(1757699258.715:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:47:39.994181 systemd[1]: Successfully loaded SELinux policy in 64.844ms. Sep 12 17:47:39.994210 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.225ms. Sep 12 17:47:39.994232 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:47:39.994253 systemd[1]: Detected virtualization amazon. Sep 12 17:47:39.994279 systemd[1]: Detected architecture x86-64. Sep 12 17:47:39.994299 systemd[1]: Detected first boot. Sep 12 17:47:39.994321 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:47:39.994341 zram_generator::config[1437]: No configuration found. Sep 12 17:47:39.994363 kernel: Guest personality initialized and is inactive Sep 12 17:47:39.994383 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 12 17:47:39.994401 kernel: Initialized host personality Sep 12 17:47:39.994419 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:47:39.994439 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:47:39.994464 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:47:39.994484 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:47:39.994505 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:47:39.994525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:47:39.994547 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Sep 12 17:47:39.994567 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:47:39.994588 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:47:39.994608 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:47:39.994638 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:47:39.994660 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:47:39.994682 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:47:39.994701 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:47:39.994721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:47:39.994743 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:47:39.994763 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:47:39.994783 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:47:39.994803 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:47:39.994827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:47:39.994848 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:47:39.994868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:47:39.994888 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:47:39.994908 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:47:39.994927 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:47:39.994948 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:47:39.994968 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:47:39.994991 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:47:39.995011 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:47:39.995032 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:47:39.995052 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:47:39.995072 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:47:39.995116 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:47:39.995137 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:47:39.995158 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:47:39.995179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:47:39.995203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:47:39.995224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:47:39.995245 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:47:39.995266 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:47:39.995287 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 12 17:47:39.995307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:39.995329 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:47:39.995350 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:47:39.995374 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:47:39.995396 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:47:39.995414 systemd[1]: Reached target machines.target - Containers. Sep 12 17:47:39.995433 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:47:39.995452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:39.995471 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:47:39.995491 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:47:39.995510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:47:39.995529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:47:39.995551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:47:39.995570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:47:39.995589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:47:39.995608 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:47:39.995628 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:47:39.995647 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:47:39.995666 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:47:39.995685 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:47:39.995707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:39.995729 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:47:39.995748 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:47:39.995768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:47:39.995793 kernel: loop: module loaded Sep 12 17:47:39.995813 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:47:39.995832 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:47:39.995854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:47:39.995873 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:47:39.995893 systemd[1]: Stopped verity-setup.service. Sep 12 17:47:39.995914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:47:39.995933 kernel: fuse: init (API version 7.41) Sep 12 17:47:39.995954 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:47:39.995973 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:47:39.995992 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:47:39.996011 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:47:39.996031 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:47:39.996051 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:47:39.996070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:47:39.998145 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:47:39.998179 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:47:39.998197 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:47:39.998216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:47:39.998235 kernel: ACPI: bus type drm_connector registered Sep 12 17:47:39.998254 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:47:39.998273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:47:39.998292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:47:39.998311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:47:39.998333 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:47:39.998352 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:47:39.998371 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:47:39.998389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:47:39.998408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:47:39.998427 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:47:39.998447 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:47:39.998468 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:47:39.998490 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:47:39.998511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:47:39.998530 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:47:39.998549 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:47:39.998569 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:47:39.998589 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:47:39.998613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:39.998676 systemd-journald[1520]: Collecting audit messages is disabled. Sep 12 17:47:39.998722 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:47:39.998746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 12 17:47:39.998770 systemd-journald[1520]: Journal started Sep 12 17:47:39.998822 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec2da26bb19bcc873304e1dfc4a80109) is 4.8M, max 38.4M, 33.6M free. Sep 12 17:47:39.557159 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:47:39.581506 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:47:39.582779 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:47:40.005103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:47:40.010107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:47:40.027119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:47:40.027202 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:47:40.038698 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:47:40.038781 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:47:40.049785 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:47:40.051265 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:47:40.054746 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:47:40.082181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:47:40.103568 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:47:40.122423 kernel: loop0: detected capacity change from 0 to 128016 Sep 12 17:47:40.119279 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:47:40.121544 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:47:40.126309 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:47:40.144124 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec2da26bb19bcc873304e1dfc4a80109 is 102.714ms for 1024 entries. Sep 12 17:47:40.144124 systemd-journald[1520]: System Journal (/var/log/journal/ec2da26bb19bcc873304e1dfc4a80109) is 8M, max 195.6M, 187.6M free. Sep 12 17:47:40.270731 systemd-journald[1520]: Received client request to flush runtime journal. Sep 12 17:47:40.270811 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:47:40.270848 kernel: loop1: detected capacity change from 0 to 221472 Sep 12 17:47:40.151559 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:47:40.159385 systemd-tmpfiles[1550]: ACLs are not supported, ignoring. Sep 12 17:47:40.159407 systemd-tmpfiles[1550]: ACLs are not supported, ignoring. Sep 12 17:47:40.169002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:47:40.180539 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:47:40.187815 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:47:40.272105 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:47:40.283227 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 12 17:47:40.287284 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:47:40.295262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:47:40.337135 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Sep 12 17:47:40.337546 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Sep 12 17:47:40.347905 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:47:40.351121 kernel: loop2: detected capacity change from 0 to 111000 Sep 12 17:47:40.427113 kernel: loop3: detected capacity change from 0 to 72360 Sep 12 17:47:40.488119 kernel: loop4: detected capacity change from 0 to 128016 Sep 12 17:47:40.517119 kernel: loop5: detected capacity change from 0 to 221472 Sep 12 17:47:40.556119 kernel: loop6: detected capacity change from 0 to 111000 Sep 12 17:47:40.580122 kernel: loop7: detected capacity change from 0 to 72360 Sep 12 17:47:40.588110 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:47:40.610183 (sd-merge)[1597]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:47:40.610697 (sd-merge)[1597]: Merged extensions into '/usr'. Sep 12 17:47:40.618818 systemd[1]: Reload requested from client PID 1549 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:47:40.618967 systemd[1]: Reloading... Sep 12 17:47:40.739154 zram_generator::config[1623]: No configuration found. Sep 12 17:47:41.111376 systemd[1]: Reloading finished in 491 ms. Sep 12 17:47:41.135374 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:47:41.143354 systemd[1]: Starting ensure-sysext.service... Sep 12 17:47:41.148369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:47:41.187433 systemd[1]: Reload requested from client PID 1674 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:47:41.187711 systemd[1]: Reloading... Sep 12 17:47:41.214617 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:47:41.214670 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:47:41.217132 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:47:41.217738 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:47:41.219166 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:47:41.219711 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Sep 12 17:47:41.220196 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Sep 12 17:47:41.235939 systemd-tmpfiles[1675]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:47:41.235956 systemd-tmpfiles[1675]: Skipping /boot Sep 12 17:47:41.261880 systemd-tmpfiles[1675]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:47:41.261898 systemd-tmpfiles[1675]: Skipping /boot Sep 12 17:47:41.322132 zram_generator::config[1703]: No configuration found. Sep 12 17:47:41.414998 ldconfig[1545]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:47:41.549105 systemd[1]: Reloading finished in 360 ms. 
Sep 12 17:47:41.569006 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:47:41.569974 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:47:41.585934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:47:41.597119 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:47:41.602463 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:47:41.610408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:47:41.616607 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:47:41.620592 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:47:41.625406 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:47:41.631561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:41.631847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:41.635377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:47:41.642444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:47:41.646149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:47:41.646901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:41.647117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:41.647264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:41.653033 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:41.653351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:41.653593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:41.653738 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:41.653874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:41.661878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:41.662290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:41.666196 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 12 17:47:41.666970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:41.668189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:41.668516 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:47:41.669267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:41.687885 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:47:41.690186 systemd[1]: Finished ensure-sysext.service. Sep 12 17:47:41.699603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:47:41.705995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:47:41.707789 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:47:41.709382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:47:41.711021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:47:41.728317 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:47:41.731505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:47:41.733174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:47:41.735250 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:47:41.735518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:47:41.745590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:47:41.767238 systemd-udevd[1762]: Using default interface naming scheme 'v255'. Sep 12 17:47:41.776206 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:47:41.781812 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:47:41.789310 augenrules[1796]: No rules Sep 12 17:47:41.792989 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:47:41.793654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:47:41.803961 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:47:41.818915 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:47:41.844663 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:47:41.851254 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:47:41.863278 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:47:41.865181 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:47:41.967734 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:47:41.981197 (udev-worker)[1809]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:47:42.036160 systemd-resolved[1761]: Positive Trust Anchors: Sep 12 17:47:42.036181 systemd-resolved[1761]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:47:42.036242 systemd-resolved[1761]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:47:42.042007 systemd-resolved[1761]: Defaulting to hostname 'linux'. Sep 12 17:47:42.045394 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:47:42.047247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:47:42.047876 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:47:42.048512 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:47:42.050243 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:47:42.050795 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 17:47:42.051926 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:47:42.053311 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:47:42.053840 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:47:42.055177 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:47:42.055221 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:47:42.055689 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:47:42.058542 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:47:42.063134 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:47:42.071380 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:47:42.074021 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:47:42.075732 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:47:42.091140 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:47:42.092339 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:47:42.095259 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:47:42.097052 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:47:42.098820 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:47:42.099873 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:47:42.100013 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:47:42.105261 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Sep 12 17:47:42.109347 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:47:42.114403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:47:42.122140 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:47:42.127111 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 12 17:47:42.127138 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:47:42.136145 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:47:42.136758 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:47:42.142485 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 17:47:42.151369 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:47:42.161256 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:47:42.165319 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:47:42.171155 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:47:42.175154 systemd-networkd[1812]: lo: Link UP Sep 12 17:47:42.200495 jq[1847]: false Sep 12 17:47:42.181377 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:47:42.193863 systemd-networkd[1812]: lo: Gained carrier Sep 12 17:47:42.203345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:47:42.212156 systemd-networkd[1812]: Enumeration completed Sep 12 17:47:42.212856 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:47:42.214500 oslogin_cache_refresh[1849]: Refreshing passwd entry cache Sep 12 17:47:42.218506 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Refreshing passwd entry cache Sep 12 17:47:42.219399 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:47:42.219409 systemd-networkd[1812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:47:42.228228 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:47:42.229376 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:47:42.233380 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:47:42.239798 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:47:42.244038 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:47:42.245193 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:47:42.248512 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:47:42.248795 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:47:42.256563 systemd-networkd[1812]: eth0: Link UP Sep 12 17:47:42.269952 oslogin_cache_refresh[1849]: Failure getting users, quitting Sep 12 17:47:42.270520 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Failure getting users, quitting Sep 12 17:47:42.270520 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:47:42.270520 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Refreshing group entry cache Sep 12 17:47:42.256745 systemd-networkd[1812]: eth0: Gained carrier Sep 12 17:47:42.269975 oslogin_cache_refresh[1849]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:47:42.256779 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:47:42.270028 oslogin_cache_refresh[1849]: Refreshing group entry cache Sep 12 17:47:42.275191 jq[1863]: true Sep 12 17:47:42.280226 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:47:42.280297 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Failure getting groups, quitting Sep 12 17:47:42.284177 systemd-networkd[1812]: eth0: DHCPv4 address 172.31.16.223/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:47:42.303867 google_oslogin_nss_cache[1849]: oslogin_cache_refresh[1849]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:47:42.297129 oslogin_cache_refresh[1849]: Failure getting groups, quitting Sep 12 17:47:42.298166 systemd[1]: Reached target network.target - Network. Sep 12 17:47:42.298760 oslogin_cache_refresh[1849]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:47:42.313846 jq[1871]: true Sep 12 17:47:42.310259 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:47:42.318369 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:47:42.332237 extend-filesystems[1848]: Found /dev/nvme0n1p6 Sep 12 17:47:42.339583 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:47:42.341073 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 17:47:42.344500 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 17:47:42.354180 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:47:42.362893 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Sep 12 17:47:42.366075 update_engine[1862]: I20250912 17:47:42.365967 1862 main.cc:92] Flatcar Update Engine starting Sep 12 17:47:42.367322 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:47:42.389817 extend-filesystems[1848]: Found /dev/nvme0n1p9 Sep 12 17:47:42.397185 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 12 17:47:42.397996 extend-filesystems[1848]: Checking size of /dev/nvme0n1p9 Sep 12 17:47:42.404757 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:47:42.445576 kernel: ACPI: button: Sleep Button [SLPF] Sep 12 17:47:42.445663 tar[1869]: linux-amd64/helm Sep 12 17:47:42.447706 dbus-daemon[1845]: [system] SELinux support is enabled Sep 12 17:47:42.447918 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 12 17:47:42.454636 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:47:42.454983 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:47:42.456180 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:47:42.456461 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:47:42.476410 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:47:42.476712 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:47:42.485708 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1812 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:47:42.485467 (ntainerd)[1912]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:47:42.512386 extend-filesystems[1848]: Resized partition /dev/nvme0n1p9 Sep 12 17:47:42.492697 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 17:47:42.517601 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:47:42.523906 update_engine[1862]: I20250912 17:47:42.522341 1862 update_check_scheduler.cc:74] Next update check in 5m55s Sep 12 17:47:42.525319 extend-filesystems[1922]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:47:42.536857 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:47:42.539246 bash[1917]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:47:42.540791 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:47:42.542066 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:47:42.544262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:47:42.559624 systemd[1]: Starting sshkeys.service... Sep 12 17:47:42.615681 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 14:59:08 UTC 2025 (1): Starting Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: ---------------------------------------------------- Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: corporation. 
Support and training for ntp-4 are Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: available at https://www.nwtime.org/support Sep 12 17:47:42.620816 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: ---------------------------------------------------- Sep 12 17:47:42.617894 ntpd[1851]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:47:42.617906 ntpd[1851]: ---------------------------------------------------- Sep 12 17:47:42.617916 ntpd[1851]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:47:42.617926 ntpd[1851]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:47:42.617935 ntpd[1851]: corporation. Support and training for ntp-4 are Sep 12 17:47:42.617944 ntpd[1851]: available at https://www.nwtime.org/support Sep 12 17:47:42.617953 ntpd[1851]: ---------------------------------------------------- Sep 12 17:47:42.636200 ntpd[1851]: proto: precision = 0.098 usec (-23) Sep 12 17:47:42.636516 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: proto: precision = 0.098 usec (-23) Sep 12 17:47:42.638393 ntpd[1851]: basedate set to 2025-08-31 Sep 12 17:47:42.638550 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: basedate set to 2025-08-31 Sep 12 17:47:42.638603 ntpd[1851]: gps base set to 2025-08-31 (week 2382) Sep 12 17:47:42.638663 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: gps base set to 2025-08-31 (week 2382) Sep 12 17:47:42.640995 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:47:42.641144 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:47:42.641231 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:47:42.641392 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:47:42.641634 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:47:42.641717 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:47:42.641798 ntpd[1851]: Listen normally on 3 eth0 172.31.16.223:123 Sep 12 17:47:42.641870 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Listen normally on 3 eth0 172.31.16.223:123 Sep 12 17:47:42.641956 ntpd[1851]: Listen normally on 4 lo [::1]:123 Sep 12 17:47:42.643114 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Listen normally on 4 lo [::1]:123 Sep 12 17:47:42.643114 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: bind(21) AF_INET6 fe80::435:53ff:fea5:76d3%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:47:42.643114 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: unable to create socket on eth0 (5) for fe80::435:53ff:fea5:76d3%2#123 Sep 12 17:47:42.643114 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: failed to init interface for address fe80::435:53ff:fea5:76d3%2 Sep 12 17:47:42.643114 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: Listening on routing socket on fd #21 for interface updates Sep 12 17:47:42.642069 ntpd[1851]: bind(21) AF_INET6 fe80::435:53ff:fea5:76d3%2#123 flags 0x11 failed: Cannot assign requested address Sep 12 17:47:42.642137 ntpd[1851]: unable to create socket on eth0 (5) for fe80::435:53ff:fea5:76d3%2#123 Sep 12 17:47:42.642153 ntpd[1851]: failed to init interface for address fe80::435:53ff:fea5:76d3%2 Sep 12 17:47:42.642189 ntpd[1851]: Listening on routing socket on fd #21 for interface updates Sep 12 17:47:42.644146 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:47:42.644247 ntpd[1851]: 12 Sep 17:47:42 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:47:42.648525 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:47:42.651233 ntpd[1851]: 12 Sep 
17:47:42 ntpd[1851]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:47:42.690729 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:47:42.696233 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:47:42.745728 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.724 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.727 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.730 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.730 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.733 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.733 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.737 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.738 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.738 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.742 INFO Fetch failed with 404: resource not found Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.742 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.743 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.745 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.747 INFO Fetch successful Sep 12 17:47:42.749416 coreos-metadata[1844]: Sep 12 17:47:42.748 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:47:42.750468 coreos-metadata[1844]: Sep 12 17:47:42.750 INFO Fetch successful Sep 12 17:47:42.752159 coreos-metadata[1844]: Sep 12 17:47:42.750 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:47:42.752255 extend-filesystems[1922]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:47:42.752255 extend-filesystems[1922]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:47:42.752255 extend-filesystems[1922]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:47:42.757388 extend-filesystems[1848]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:47:42.753024 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:47:42.754387 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 12 17:47:42.760843 coreos-metadata[1844]: Sep 12 17:47:42.760 INFO Fetch successful Sep 12 17:47:42.901948 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:47:42.903294 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:47:42.972059 sshd_keygen[1903]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:47:43.006399 locksmithd[1924]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:47:43.066219 systemd-logind[1858]: New seat seat0. Sep 12 17:47:43.067180 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:47:43.108590 coreos-metadata[1931]: Sep 12 17:47:43.108 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:47:43.110284 coreos-metadata[1931]: Sep 12 17:47:43.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:47:43.112599 coreos-metadata[1931]: Sep 12 17:47:43.112 INFO Fetch successful Sep 12 17:47:43.112685 coreos-metadata[1931]: Sep 12 17:47:43.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:47:43.115800 coreos-metadata[1931]: Sep 12 17:47:43.115 INFO Fetch successful Sep 12 17:47:43.123127 unknown[1931]: wrote ssh authorized keys file for user: core Sep 12 17:47:43.138026 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:47:43.150513 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:47:43.176567 update-ssh-keys[2007]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:47:43.179420 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:47:43.188692 systemd[1]: Finished sshkeys.service. Sep 12 17:47:43.195269 containerd[1912]: time="2025-09-12T17:47:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:47:43.203102 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:47:43.203419 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:47:43.212439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:47:43.226108 containerd[1912]: time="2025-09-12T17:47:43.224312619Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:47:43.291984 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:47:43.296998 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:47:43.300311 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:47:43.301561 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 12 17:47:43.317256 containerd[1912]: time="2025-09-12T17:47:43.317208317Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.114µs" Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.317962214Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.318008295Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.318231665Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.318257428Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.318291215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.318363068Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:47:43.318478 containerd[1912]: time="2025-09-12T17:47:43.318379714Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334136869Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334181472Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334216285Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334228973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334372968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334618865Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334663049Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334680969Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:47:43.335142 containerd[1912]: time="2025-09-12T17:47:43.334733073Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:47:43.341053 containerd[1912]: 
time="2025-09-12T17:47:43.340530881Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:47:43.341053 containerd[1912]: time="2025-09-12T17:47:43.340678172Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:47:43.344493 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:47:43.350813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356650805Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356729234Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356751567Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356770233Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356788706Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356804927Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356824623Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356842023Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356859602Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356875114Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356889015Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.356907049Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.357075995Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:47:43.357447 containerd[1912]: time="2025-09-12T17:47:43.357154967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357176237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357281649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357298934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events 
type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357313967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357331952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357380237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357398254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357416869Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357450573Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357550119Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357569253Z" level=info msg="Start snapshots syncer" Sep 12 17:47:43.357975 containerd[1912]: time="2025-09-12T17:47:43.357628775Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:47:43.363140 containerd[1912]: time="2025-09-12T17:47:43.359394449Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:47:43.363140 containerd[1912]: time="2025-09-12T17:47:43.359511649Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Sep 12 17:47:43.365279 containerd[1912]: time="2025-09-12T17:47:43.365230231Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367306209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367383652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367404003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367435864Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367456738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367472547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367488279Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367543659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367591115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367618214Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367698928Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367722807Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:47:43.370724 containerd[1912]: time="2025-09-12T17:47:43.367805143Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367837112Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367852746Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367870348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367901207Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367928558Z" level=info msg="runtime interface created" Sep 12 17:47:43.371247 containerd[1912]: 
time="2025-09-12T17:47:43.367937602Z" level=info msg="created NRI interface" Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367952234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.367992099Z" level=info msg="Connect containerd service" Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.368032884Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:47:43.371247 containerd[1912]: time="2025-09-12T17:47:43.370935798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:47:43.426914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:47:43.487525 systemd-networkd[1812]: eth0: Gained IPv6LL Sep 12 17:47:43.496654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:47:43.499709 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:47:43.506181 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:47:43.513807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:47:43.523272 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:47:43.625341 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:47:43.631301 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:47:43.637975 dbus-daemon[1845]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1921 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:47:43.652190 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:47:43.662808 containerd[1912]: time="2025-09-12T17:47:43.662764265Z" level=info msg="Start subscribing containerd event" Sep 12 17:47:43.662915 containerd[1912]: time="2025-09-12T17:47:43.662838851Z" level=info msg="Start recovering state" Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663180023Z" level=info msg="Start event monitor" Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663213411Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663224200Z" level=info msg="Start streaming server" Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663235953Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663246374Z" level=info msg="runtime interface starting up..." Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663255862Z" level=info msg="starting plugins..." Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663273036Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663744923Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:47:43.664965 containerd[1912]: time="2025-09-12T17:47:43.663807330Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 17:47:43.664640 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:47:43.667981 containerd[1912]: time="2025-09-12T17:47:43.665718430Z" level=info msg="containerd successfully booted in 0.471003s" Sep 12 17:47:43.685652 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:47:43.797937 amazon-ssm-agent[2070]: Initializing new seelog logger Sep 12 17:47:43.799434 amazon-ssm-agent[2070]: New Seelog Logger Creation Complete Sep 12 17:47:43.799616 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.799668 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.801049 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 processing appconfig overrides Sep 12 17:47:43.801802 systemd-logind[1858]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:47:43.802074 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.803432 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.803432 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 processing appconfig overrides Sep 12 17:47:43.803432 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.803432 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.803432 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 processing appconfig overrides Sep 12 17:47:43.807193 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.8015 INFO Proxy environment variables: Sep 12 17:47:43.811808 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.811808 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:43.811808 amazon-ssm-agent[2070]: 2025/09/12 17:47:43 processing appconfig overrides Sep 12 17:47:43.818305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:43.840495 systemd-logind[1858]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 12 17:47:43.846041 systemd-logind[1858]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:47:43.858184 tar[1869]: linux-amd64/LICENSE Sep 12 17:47:43.860782 tar[1869]: linux-amd64/README.md Sep 12 17:47:43.891675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:47:43.892005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:43.897641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:43.910186 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:47:43.912303 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.8020 INFO http_proxy: Sep 12 17:47:44.011125 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.8020 INFO no_proxy: Sep 12 17:47:44.078315 polkitd[2090]: Started polkitd version 126 Sep 12 17:47:44.092039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:47:44.092845 polkitd[2090]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:47:44.094748 polkitd[2090]: Loading rules from directory /run/polkit-1/rules.d Sep 12 17:47:44.094894 polkitd[2090]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:47:44.095424 polkitd[2090]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 12 17:47:44.095529 polkitd[2090]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 12 17:47:44.096775 polkitd[2090]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:47:44.097629 polkitd[2090]: Finished loading, compiling and executing 2 rules Sep 12 17:47:44.099201 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 17:47:44.102697 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:47:44.103252 polkitd[2090]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:47:44.110625 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.8020 INFO https_proxy: Sep 12 17:47:44.126557 systemd-resolved[1761]: System hostname changed to 'ip-172-31-16-223'. Sep 12 17:47:44.126926 systemd-hostnamed[1921]: Hostname set to (transient) Sep 12 17:47:44.210565 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.8025 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:47:44.309884 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.8027 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:47:44.408520 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9642 INFO Agent will take identity from EC2 Sep 12 17:47:44.463520 amazon-ssm-agent[2070]: 2025/09/12 17:47:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:44.463520 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:47:44.463659 amazon-ssm-agent[2070]: 2025/09/12 17:47:44 processing appconfig overrides Sep 12 17:47:44.491101 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9703 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Sep 12 17:47:44.491101 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9712 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:47:44.491101 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9712 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:47:44.491101 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9712 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Sep 12 17:47:44.491101 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9712 INFO [Registrar] Starting registrar module Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9811 INFO [EC2Identity] Checking disk for registration info Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9812 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:43.9812 INFO [EC2Identity] Generating registration keypair Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4077 INFO [EC2Identity] Checking write access before registering Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4081 INFO [EC2Identity] Registering EC2 instance with Systems Manager Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4632 INFO [EC2Identity] EC2 registration was successful. 
Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4632 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4633 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4633 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4908 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:47:44.491296 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4910 INFO [CredentialRefresher] Credentials ready Sep 12 17:47:44.507122 amazon-ssm-agent[2070]: 2025-09-12 17:47:44.4911 INFO [CredentialRefresher] Next credential rotation will be in 29.999994042683333 minutes Sep 12 17:47:45.500994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:47:45.503924 systemd[1]: Started sshd@0-172.31.16.223:22-139.178.68.195:42402.service - OpenSSH per-connection server daemon (139.178.68.195:42402). Sep 12 17:47:45.510792 amazon-ssm-agent[2070]: 2025-09-12 17:47:45.5087 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:47:45.611704 amazon-ssm-agent[2070]: 2025-09-12 17:47:45.5129 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2143) started Sep 12 17:47:45.619067 ntpd[1851]: Listen normally on 6 eth0 [fe80::435:53ff:fea5:76d3%2]:123 Sep 12 17:47:45.619852 ntpd[1851]: 12 Sep 17:47:45 ntpd[1851]: Listen normally on 6 eth0 [fe80::435:53ff:fea5:76d3%2]:123 Sep 12 17:47:45.713175 amazon-ssm-agent[2070]: 2025-09-12 17:47:45.5130 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:47:45.768609 sshd[2142]: Accepted publickey for core from 139.178.68.195 port 42402 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:45.770869 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:45.777759 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:47:45.780302 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:47:45.790205 systemd-logind[1858]: New session 1 of user core. Sep 12 17:47:45.800908 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:47:45.804579 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:47:45.817880 (systemd)[2160]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:47:45.821165 systemd-logind[1858]: New session c1 of user core. Sep 12 17:47:45.918681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:45.919953 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:47:45.932689 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:47:45.974603 systemd[2160]: Queued start job for default target default.target. Sep 12 17:47:45.982504 systemd[2160]: Created slice app.slice - User Application Slice. Sep 12 17:47:45.982547 systemd[2160]: Reached target paths.target - Paths. Sep 12 17:47:45.982748 systemd[2160]: Reached target timers.target - Timers. 
Sep 12 17:47:45.984044 systemd[2160]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:47:46.004676 systemd[2160]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:47:46.004784 systemd[2160]: Reached target sockets.target - Sockets. Sep 12 17:47:46.004828 systemd[2160]: Reached target basic.target - Basic System. Sep 12 17:47:46.004864 systemd[2160]: Reached target default.target - Main User Target. Sep 12 17:47:46.004893 systemd[2160]: Startup finished in 176ms. Sep 12 17:47:46.005029 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:47:46.016338 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:47:46.018297 systemd[1]: Startup finished in 2.756s (kernel) + 7.018s (initrd) + 7.367s (userspace) = 17.142s. Sep 12 17:47:46.172070 systemd[1]: Started sshd@1-172.31.16.223:22-139.178.68.195:42410.service - OpenSSH per-connection server daemon (139.178.68.195:42410). Sep 12 17:47:46.345042 sshd[2181]: Accepted publickey for core from 139.178.68.195 port 42410 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:46.346695 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:46.352232 systemd-logind[1858]: New session 2 of user core. Sep 12 17:47:46.362321 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:47:46.478048 sshd[2188]: Connection closed by 139.178.68.195 port 42410 Sep 12 17:47:46.478654 sshd-session[2181]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:46.483468 systemd[1]: sshd@1-172.31.16.223:22-139.178.68.195:42410.service: Deactivated successfully. Sep 12 17:47:46.485062 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:47:46.488148 systemd-logind[1858]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:47:46.489417 systemd-logind[1858]: Removed session 2. Sep 12 17:47:46.508860 systemd[1]: Started sshd@2-172.31.16.223:22-139.178.68.195:42420.service - OpenSSH per-connection server daemon (139.178.68.195:42420). Sep 12 17:47:46.672927 sshd[2194]: Accepted publickey for core from 139.178.68.195 port 42420 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:46.674700 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:46.681735 systemd-logind[1858]: New session 3 of user core. Sep 12 17:47:46.687295 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:47:46.800945 sshd[2197]: Connection closed by 139.178.68.195 port 42420 Sep 12 17:47:46.801765 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:46.807355 systemd[1]: sshd@2-172.31.16.223:22-139.178.68.195:42420.service: Deactivated successfully. Sep 12 17:47:46.809811 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:47:46.811852 systemd-logind[1858]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:47:46.814141 systemd-logind[1858]: Removed session 3. Sep 12 17:47:46.833192 systemd[1]: Started sshd@3-172.31.16.223:22-139.178.68.195:42432.service - OpenSSH per-connection server daemon (139.178.68.195:42432). 
Sep 12 17:47:46.987017 kubelet[2170]: E0912 17:47:46.986916 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:47:46.988707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:47:46.988852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:47:46.989521 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 263.2M memory peak. Sep 12 17:47:47.006056 sshd[2203]: Accepted publickey for core from 139.178.68.195 port 42432 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:47.007548 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:47.012619 systemd-logind[1858]: New session 4 of user core. Sep 12 17:47:47.016261 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:47:47.141645 sshd[2208]: Connection closed by 139.178.68.195 port 42432 Sep 12 17:47:47.142344 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:47.147942 systemd[1]: sshd@3-172.31.16.223:22-139.178.68.195:42432.service: Deactivated successfully. Sep 12 17:47:47.149871 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:47:47.151394 systemd-logind[1858]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:47:47.152809 systemd-logind[1858]: Removed session 4. Sep 12 17:47:47.171295 systemd[1]: Started sshd@4-172.31.16.223:22-139.178.68.195:42434.service - OpenSSH per-connection server daemon (139.178.68.195:42434). Sep 12 17:47:47.335188 sshd[2214]: Accepted publickey for core from 139.178.68.195 port 42434 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:47.336387 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:47.342155 systemd-logind[1858]: New session 5 of user core. Sep 12 17:47:47.351303 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:47:47.481273 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:47:47.481734 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:47.497120 sudo[2218]: pam_unix(sudo:session): session closed for user root Sep 12 17:47:47.519140 sshd[2217]: Connection closed by 139.178.68.195 port 42434 Sep 12 17:47:47.519873 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:47.525464 systemd[1]: sshd@4-172.31.16.223:22-139.178.68.195:42434.service: Deactivated successfully. Sep 12 17:47:47.527607 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:47:47.528819 systemd-logind[1858]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:47:47.530672 systemd-logind[1858]: Removed session 5. Sep 12 17:47:47.557199 systemd[1]: Started sshd@5-172.31.16.223:22-139.178.68.195:42442.service - OpenSSH per-connection server daemon (139.178.68.195:42442). 
Sep 12 17:47:47.725691 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 42442 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:47.727010 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:47.732579 systemd-logind[1858]: New session 6 of user core. Sep 12 17:47:47.737453 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:47:47.837871 sudo[2229]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:47:47.838245 sudo[2229]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:47.850582 sudo[2229]: pam_unix(sudo:session): session closed for user root Sep 12 17:47:47.856852 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:47:47.857593 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:47.869541 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:47:47.912237 augenrules[2251]: No rules Sep 12 17:47:47.913916 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:47:47.914220 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:47:47.916350 sudo[2228]: pam_unix(sudo:session): session closed for user root Sep 12 17:47:47.938916 sshd[2227]: Connection closed by 139.178.68.195 port 42442 Sep 12 17:47:47.939597 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:47.944417 systemd[1]: sshd@5-172.31.16.223:22-139.178.68.195:42442.service: Deactivated successfully. Sep 12 17:47:47.946427 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:47:47.947394 systemd-logind[1858]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:47:47.949281 systemd-logind[1858]: Removed session 6. Sep 12 17:47:47.974885 systemd[1]: Started sshd@6-172.31.16.223:22-139.178.68.195:42450.service - OpenSSH per-connection server daemon (139.178.68.195:42450). Sep 12 17:47:48.148193 sshd[2260]: Accepted publickey for core from 139.178.68.195 port 42450 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:47:48.149641 sshd-session[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:48.155127 systemd-logind[1858]: New session 7 of user core. Sep 12 17:47:48.161267 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:47:48.259563 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:47:48.259836 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:47:48.824478 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 17:47:48.843581 (dockerd)[2283]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:47:49.275308 dockerd[2283]: time="2025-09-12T17:47:49.275247994Z" level=info msg="Starting up" Sep 12 17:47:49.276397 dockerd[2283]: time="2025-09-12T17:47:49.276364874Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:47:49.291493 dockerd[2283]: time="2025-09-12T17:47:49.291446117Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:47:49.420957 dockerd[2283]: time="2025-09-12T17:47:49.420911490Z" level=info msg="Loading containers: start." Sep 12 17:47:49.434108 kernel: Initializing XFRM netlink socket Sep 12 17:47:50.102666 systemd-resolved[1761]: Clock change detected. Flushing caches. Sep 12 17:47:50.179089 (udev-worker)[2305]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:47:50.234196 systemd-networkd[1812]: docker0: Link UP Sep 12 17:47:50.240366 dockerd[2283]: time="2025-09-12T17:47:50.240319154Z" level=info msg="Loading containers: done." Sep 12 17:47:50.258865 dockerd[2283]: time="2025-09-12T17:47:50.258793192Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:47:50.259074 dockerd[2283]: time="2025-09-12T17:47:50.258906102Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:47:50.259074 dockerd[2283]: time="2025-09-12T17:47:50.259022272Z" level=info msg="Initializing buildkit" Sep 12 17:47:50.286756 dockerd[2283]: time="2025-09-12T17:47:50.286680527Z" level=info msg="Completed buildkit initialization" Sep 12 17:47:50.294300 dockerd[2283]: time="2025-09-12T17:47:50.294252969Z" level=info msg="Daemon has completed initialization" Sep 12 17:47:50.294431 dockerd[2283]: time="2025-09-12T17:47:50.294318460Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:47:50.294703 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:47:51.514928 containerd[1912]: time="2025-09-12T17:47:51.514887848Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:47:52.040254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259857561.mount: Deactivated successfully. 
Sep 12 17:47:53.481901 containerd[1912]: time="2025-09-12T17:47:53.481831723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:53.482944 containerd[1912]: time="2025-09-12T17:47:53.482750303Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:47:53.484356 containerd[1912]: time="2025-09-12T17:47:53.484321343Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:53.487299 containerd[1912]: time="2025-09-12T17:47:53.487264814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:53.488179 containerd[1912]: time="2025-09-12T17:47:53.488136953Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.973211023s" Sep 12 17:47:53.488290 containerd[1912]: time="2025-09-12T17:47:53.488273597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:47:53.488830 containerd[1912]: time="2025-09-12T17:47:53.488803007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:47:55.131658 containerd[1912]: time="2025-09-12T17:47:55.131610308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:55.134060 containerd[1912]: time="2025-09-12T17:47:55.133761404Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:47:55.136454 containerd[1912]: time="2025-09-12T17:47:55.136411085Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:55.140755 containerd[1912]: time="2025-09-12T17:47:55.140710038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:55.141854 containerd[1912]: time="2025-09-12T17:47:55.141821853Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.652987092s" Sep 12 17:47:55.142046 containerd[1912]: time="2025-09-12T17:47:55.141962728Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 
17:47:55.142643 containerd[1912]: time="2025-09-12T17:47:55.142575517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:47:56.465592 containerd[1912]: time="2025-09-12T17:47:56.465529863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:56.471284 containerd[1912]: time="2025-09-12T17:47:56.471232160Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:47:56.476832 containerd[1912]: time="2025-09-12T17:47:56.476760635Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:56.484116 containerd[1912]: time="2025-09-12T17:47:56.484034153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:56.484934 containerd[1912]: time="2025-09-12T17:47:56.484796921Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.342192087s" Sep 12 17:47:56.484934 containerd[1912]: time="2025-09-12T17:47:56.484824902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:47:56.485562 containerd[1912]: time="2025-09-12T17:47:56.485423718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:47:57.487243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404054994.mount: Deactivated successfully. Sep 12 17:47:57.490221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:47:57.494407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:47:57.758309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:47:57.770207 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:47:57.851363 kubelet[2575]: E0912 17:47:57.851317 2575 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:47:57.856968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:47:57.857161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:47:57.857627 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.9M memory peak. 
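Note on the failure above: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later, when the node is bootstrapped into the cluster, so this early crash-and-restart cycle is expected at this point in the log. Below is a minimal Python sketch for summarising such failures from a journal dump (for example, journalctl -u kubelet.service output); the regexes and the summarize_kubelet_failures helper are illustrative assumptions written against this excerpt, not part of kubelet or Flatcar tooling.

    import re
    import sys

    # Illustrative patterns written against the journal excerpt above.
    RESTART_RE = re.compile(
        r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)"
    )
    FAILURE_RE = re.compile(
        r'run\.go:\d+\] "command failed" err="((?:[^"\\]|\\.)*)"'
    )

    def summarize_kubelet_failures(journal_text):
        """Report the last restart counter and 'command failed' error seen."""
        restarts = RESTART_RE.findall(journal_text)
        failures = FAILURE_RE.findall(journal_text)
        return {
            "restart_counter": int(restarts[-1]) if restarts else 0,
            "last_error": failures[-1] if failures else None,
        }

    if __name__ == "__main__":
        print(summarize_kubelet_failures(sys.stdin.read()))

Fed the excerpt above, this would report restart counter 1 and the missing /var/lib/kubelet/config.yaml error.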
Sep 12 17:47:58.207990 containerd[1912]: time="2025-09-12T17:47:58.207947375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:58.212371 containerd[1912]: time="2025-09-12T17:47:58.212317905Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:47:58.220058 containerd[1912]: time="2025-09-12T17:47:58.218960939Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:58.229005 containerd[1912]: time="2025-09-12T17:47:58.228960579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:58.229724 containerd[1912]: time="2025-09-12T17:47:58.229695069Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.744059255s" Sep 12 17:47:58.229816 containerd[1912]: time="2025-09-12T17:47:58.229802957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:47:58.230275 containerd[1912]: time="2025-09-12T17:47:58.230246907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:47:58.786939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999889704.mount: Deactivated successfully. 
Sep 12 17:47:59.786995 containerd[1912]: time="2025-09-12T17:47:59.786945316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:59.789434 containerd[1912]: time="2025-09-12T17:47:59.789070031Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:47:59.792246 containerd[1912]: time="2025-09-12T17:47:59.792101047Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:59.796766 containerd[1912]: time="2025-09-12T17:47:59.796715847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:59.798329 containerd[1912]: time="2025-09-12T17:47:59.798291394Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.568010205s" Sep 12 17:47:59.798616 containerd[1912]: time="2025-09-12T17:47:59.798592567Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:47:59.799733 containerd[1912]: time="2025-09-12T17:47:59.799507397Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:48:00.296809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599872981.mount: Deactivated successfully. 
Sep 12 17:48:00.311412 containerd[1912]: time="2025-09-12T17:48:00.311354294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:00.319215 containerd[1912]: time="2025-09-12T17:48:00.318091956Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:00.319215 containerd[1912]: time="2025-09-12T17:48:00.318299565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:48:00.321818 containerd[1912]: time="2025-09-12T17:48:00.321665977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:00.323419 containerd[1912]: time="2025-09-12T17:48:00.322850781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.302959ms" Sep 12 17:48:00.323419 containerd[1912]: time="2025-09-12T17:48:00.322899014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:48:00.323622 containerd[1912]: time="2025-09-12T17:48:00.323553581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:48:00.886341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063756500.mount: Deactivated successfully. 
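Each "Pulled image" entry above carries a size in bytes and a wall-clock duration, so a rough effective pull rate can be read straight off the log; the kube-apiserver pull, for example, works out to about 13.6 MiB/s (28113723 bytes in 1.973211023s). The following is a small Python sketch of that calculation, written only against the single-line entry form shown in this excerpt, where containerd escapes inner quotes as \"; the regex and the pull_rates name are assumptions, not an existing tool.

    import re
    import sys

    # Matches single-line 'Pulled image' entries as they appear above.
    PULL_RE = re.compile(
        r'Pulled image \\"([^\\"]+)\\".*?size \\"(\d+)\\" in ([\d.]+)(ms|s)\b'
    )

    def pull_rates(journal_text):
        """Yield (image, MiB, seconds, MiB/s) for each pull entry found."""
        for image, size, value, unit in PULL_RE.findall(journal_text):
            seconds = float(value) / 1000.0 if unit == "ms" else float(value)
            mib = int(size) / (1024 * 1024)
            yield image, mib, seconds, mib / seconds

    if __name__ == "__main__":
        for image, mib, seconds, rate in pull_rates(sys.stdin.read()):
            print(f"{image}: {mib:.1f} MiB in {seconds:.2f}s ({rate:.1f} MiB/s)")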
Sep 12 17:48:04.074742 containerd[1912]: time="2025-09-12T17:48:04.074662465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:04.076427 containerd[1912]: time="2025-09-12T17:48:04.076373943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:48:04.078911 containerd[1912]: time="2025-09-12T17:48:04.078832962Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:04.082680 containerd[1912]: time="2025-09-12T17:48:04.082620855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:04.084068 containerd[1912]: time="2025-09-12T17:48:04.083686769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.76010125s" Sep 12 17:48:04.084068 containerd[1912]: time="2025-09-12T17:48:04.083732034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:48:06.621254 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:06.621509 systemd[1]: kubelet.service: Consumed 213ms CPU time, 110.9M memory peak. Sep 12 17:48:06.624612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:06.657782 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit session-7.scope)... Sep 12 17:48:06.657801 systemd[1]: Reloading... Sep 12 17:48:06.799203 zram_generator::config[2768]: No configuration found. Sep 12 17:48:07.084581 systemd[1]: Reloading finished in 425 ms. Sep 12 17:48:07.142659 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:48:07.142742 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:48:07.143006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:07.143058 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.2M memory peak. Sep 12 17:48:07.144901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:07.391662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:07.405772 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:48:07.452708 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:07.452708 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:48:07.452708 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:07.454875 kubelet[2828]: I0912 17:48:07.454814 2828 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:48:08.099378 kubelet[2828]: I0912 17:48:08.099336 2828 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:48:08.099378 kubelet[2828]: I0912 17:48:08.099367 2828 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:48:08.099703 kubelet[2828]: I0912 17:48:08.099679 2828 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:48:08.143464 kubelet[2828]: E0912 17:48:08.143193 2828 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.223:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:08.143586 kubelet[2828]: I0912 17:48:08.143531 2828 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:48:08.160849 kubelet[2828]: I0912 17:48:08.160814 2828 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:48:08.165084 kubelet[2828]: I0912 17:48:08.165039 2828 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:48:08.168801 kubelet[2828]: I0912 17:48:08.168747 2828 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:48:08.169004 kubelet[2828]: I0912 17:48:08.168964 2828 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:48:08.169325 kubelet[2828]: I0912 17:48:08.168999 2828 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:48:08.169325 kubelet[2828]: I0912 17:48:08.169303 2828 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:48:08.169325 kubelet[2828]: I0912 17:48:08.169319 2828 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:48:08.169516 kubelet[2828]: I0912 17:48:08.169427 2828 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:08.176124 kubelet[2828]: I0912 17:48:08.176064 2828 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:48:08.176124 kubelet[2828]: I0912 17:48:08.176117 2828 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:48:08.178998 kubelet[2828]: I0912 17:48:08.178691 2828 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:48:08.178998 kubelet[2828]: I0912 17:48:08.178742 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:48:08.184937 kubelet[2828]: I0912 17:48:08.184904 2828 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:48:08.190202 kubelet[2828]: I0912 17:48:08.189570 2828 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:48:08.190202 kubelet[2828]: W0912 17:48:08.189640 2828 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
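The container_manager_linux entry above embeds the kubelet's effective node configuration, including the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and the inodesFree signals < 5%), as one JSON object following nodeConfig=. Below is a short Python sketch that slices that object out by brace matching and pretty-prints the thresholds, assuming the entry sits on a single line as it does here; extract_node_config is an illustrative name, not an existing tool.

    import json
    import sys

    def extract_node_config(line):
        """Return the JSON object that follows 'nodeConfig=' on a log line."""
        start = line.index("nodeConfig=") + len("nodeConfig=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return json.loads(line[start:i + 1])
        raise ValueError("unbalanced braces after nodeConfig=")

    if __name__ == "__main__":
        for line in sys.stdin:
            if "nodeConfig=" in line:
                cfg = extract_node_config(line)
                print(json.dumps(cfg["HardEvictionThresholds"], indent=2))
                break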
Sep 12 17:48:08.194198 kubelet[2828]: I0912 17:48:08.192295 2828 server.go:1274] "Started kubelet" Sep 12 17:48:08.194198 kubelet[2828]: W0912 17:48:08.192980 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-223&limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:08.194198 kubelet[2828]: E0912 17:48:08.193038 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-223&limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:08.195285 kubelet[2828]: W0912 17:48:08.195230 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:08.195285 kubelet[2828]: E0912 17:48:08.195295 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:08.195425 kubelet[2828]: I0912 17:48:08.195366 2828 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:48:08.207794 kubelet[2828]: I0912 17:48:08.207746 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:48:08.208313 kubelet[2828]: I0912 17:48:08.208282 2828 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:48:08.210762 kubelet[2828]: E0912 17:48:08.208747 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.223:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.223:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-223.18649a309ea837b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-223,UID:ip-172-31-16-223,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-223,},FirstTimestamp:2025-09-12 17:48:08.192268209 +0000 UTC m=+0.782547607,LastTimestamp:2025-09-12 17:48:08.192268209 +0000 UTC m=+0.782547607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-223,}" Sep 12 17:48:08.212738 kubelet[2828]: I0912 17:48:08.212718 2828 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:48:08.213829 kubelet[2828]: I0912 17:48:08.213799 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:48:08.215783 kubelet[2828]: I0912 17:48:08.215735 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:48:08.219336 kubelet[2828]: I0912 17:48:08.219307 2828 volume_manager.go:289] "Starting Kubelet Volume Manager" 
Sep 12 17:48:08.219438 kubelet[2828]: I0912 17:48:08.219426 2828 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:48:08.219587 kubelet[2828]: I0912 17:48:08.219476 2828 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:48:08.219829 kubelet[2828]: W0912 17:48:08.219793 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:08.219968 kubelet[2828]: E0912 17:48:08.219841 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:08.220011 kubelet[2828]: E0912 17:48:08.219995 2828 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-223\" not found" Sep 12 17:48:08.220073 kubelet[2828]: E0912 17:48:08.220054 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-223?timeout=10s\": dial tcp 172.31.16.223:6443: connect: connection refused" interval="200ms" Sep 12 17:48:08.226053 kubelet[2828]: I0912 17:48:08.226024 2828 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:48:08.226141 kubelet[2828]: I0912 17:48:08.226128 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:48:08.230878 kubelet[2828]: I0912 17:48:08.230850 2828 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:48:08.241552 kubelet[2828]: I0912 17:48:08.241509 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:48:08.244252 kubelet[2828]: I0912 17:48:08.243833 2828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:48:08.244252 kubelet[2828]: I0912 17:48:08.243860 2828 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:48:08.244252 kubelet[2828]: I0912 17:48:08.243880 2828 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:48:08.244252 kubelet[2828]: E0912 17:48:08.243935 2828 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:48:08.258031 kubelet[2828]: W0912 17:48:08.257976 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:08.258242 kubelet[2828]: E0912 17:48:08.258220 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:08.263022 kubelet[2828]: I0912 17:48:08.262971 2828 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:48:08.263022 kubelet[2828]: I0912 17:48:08.262984 2828 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:48:08.263319 kubelet[2828]: I0912 17:48:08.263000 2828 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:08.269110 kubelet[2828]: I0912 17:48:08.268987 2828 policy_none.go:49] "None policy: Start" Sep 12 17:48:08.270640 kubelet[2828]: I0912 17:48:08.270598 2828 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:48:08.270640 kubelet[2828]: I0912 17:48:08.270623 2828 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:48:08.295663 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:48:08.309105 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:48:08.313490 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:48:08.320156 kubelet[2828]: E0912 17:48:08.320121 2828 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-223\" not found" Sep 12 17:48:08.320310 kubelet[2828]: I0912 17:48:08.320152 2828 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:48:08.320545 kubelet[2828]: I0912 17:48:08.320530 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:48:08.320631 kubelet[2828]: I0912 17:48:08.320543 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:48:08.320899 kubelet[2828]: I0912 17:48:08.320876 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:48:08.323084 kubelet[2828]: E0912 17:48:08.323047 2828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-223\" not found" Sep 12 17:48:08.357729 systemd[1]: Created slice kubepods-burstable-pod12ce029cbab989309f94faa0f0566044.slice - libcontainer container kubepods-burstable-pod12ce029cbab989309f94faa0f0566044.slice. 
Sep 12 17:48:08.370194 systemd[1]: Created slice kubepods-burstable-podc45314ef6aa022fe9671e7fcdfc6fd67.slice - libcontainer container kubepods-burstable-podc45314ef6aa022fe9671e7fcdfc6fd67.slice. Sep 12 17:48:08.376130 systemd[1]: Created slice kubepods-burstable-pod084c19b70a33cbd8f93755c5111f4d53.slice - libcontainer container kubepods-burstable-pod084c19b70a33cbd8f93755c5111f4d53.slice. Sep 12 17:48:08.420818 kubelet[2828]: I0912 17:48:08.420774 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12ce029cbab989309f94faa0f0566044-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-223\" (UID: \"12ce029cbab989309f94faa0f0566044\") " pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:08.420818 kubelet[2828]: I0912 17:48:08.420808 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:08.420818 kubelet[2828]: I0912 17:48:08.420825 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:08.421061 kubelet[2828]: I0912 17:48:08.420839 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/084c19b70a33cbd8f93755c5111f4d53-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-223\" (UID: \"084c19b70a33cbd8f93755c5111f4d53\") " pod="kube-system/kube-scheduler-ip-172-31-16-223" Sep 12 17:48:08.421061 kubelet[2828]: I0912 17:48:08.420859 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12ce029cbab989309f94faa0f0566044-ca-certs\") pod \"kube-apiserver-ip-172-31-16-223\" (UID: \"12ce029cbab989309f94faa0f0566044\") " pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:08.421061 kubelet[2828]: I0912 17:48:08.420874 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12ce029cbab989309f94faa0f0566044-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-223\" (UID: \"12ce029cbab989309f94faa0f0566044\") " pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:08.421061 kubelet[2828]: I0912 17:48:08.420895 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:08.421061 kubelet[2828]: I0912 17:48:08.420910 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:08.421364 kubelet[2828]: I0912 17:48:08.420924 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:08.421364 kubelet[2828]: E0912 17:48:08.421051 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-223?timeout=10s\": dial tcp 172.31.16.223:6443: connect: connection refused" interval="400ms" Sep 12 17:48:08.423514 kubelet[2828]: I0912 17:48:08.423196 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-223" Sep 12 17:48:08.423514 kubelet[2828]: E0912 17:48:08.423486 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.223:6443/api/v1/nodes\": dial tcp 172.31.16.223:6443: connect: connection refused" node="ip-172-31-16-223" Sep 12 17:48:08.625353 kubelet[2828]: I0912 17:48:08.625066 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-223" Sep 12 17:48:08.626270 kubelet[2828]: E0912 17:48:08.625581 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.223:6443/api/v1/nodes\": dial tcp 172.31.16.223:6443: connect: connection refused" node="ip-172-31-16-223" Sep 12 17:48:08.667984 containerd[1912]: time="2025-09-12T17:48:08.667944728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-223,Uid:12ce029cbab989309f94faa0f0566044,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:08.674709 containerd[1912]: time="2025-09-12T17:48:08.674653751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-223,Uid:c45314ef6aa022fe9671e7fcdfc6fd67,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:08.679368 containerd[1912]: time="2025-09-12T17:48:08.679197714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-223,Uid:084c19b70a33cbd8f93755c5111f4d53,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:08.805721 containerd[1912]: time="2025-09-12T17:48:08.805677901Z" level=info msg="connecting to shim d4916f1ae2a10b928f470040df9024ece750697e15e071f14b1aa45936b24b58" address="unix:///run/containerd/s/0c92f897bf657727a56e9cbad76f2eace9a0ff3e954ba1e955d72c7e4ea5e50a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:08.809252 containerd[1912]: time="2025-09-12T17:48:08.809197028Z" level=info msg="connecting to shim ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58" address="unix:///run/containerd/s/9cc7a4b6d00e5ed0c7066d0d3d50f9a01d77b7cbf9e2fc4c33a4240b7893d032" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:08.812528 containerd[1912]: time="2025-09-12T17:48:08.812236511Z" level=info msg="connecting to shim 8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5" address="unix:///run/containerd/s/5c4beef987c603f8916542054c1a1a9afe85b33c7586b4724508c556aca15a70" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:08.822449 kubelet[2828]: E0912 17:48:08.822395 2828 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-223?timeout=10s\": dial tcp 172.31.16.223:6443: connect: connection refused" interval="800ms" Sep 12 17:48:08.908422 systemd[1]: Started cri-containerd-8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5.scope - libcontainer container 8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5. Sep 12 17:48:08.911849 systemd[1]: Started cri-containerd-ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58.scope - libcontainer container ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58. Sep 12 17:48:08.914839 systemd[1]: Started cri-containerd-d4916f1ae2a10b928f470040df9024ece750697e15e071f14b1aa45936b24b58.scope - libcontainer container d4916f1ae2a10b928f470040df9024ece750697e15e071f14b1aa45936b24b58. Sep 12 17:48:09.026955 containerd[1912]: time="2025-09-12T17:48:09.026910196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-223,Uid:12ce029cbab989309f94faa0f0566044,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4916f1ae2a10b928f470040df9024ece750697e15e071f14b1aa45936b24b58\"" Sep 12 17:48:09.029762 kubelet[2828]: I0912 17:48:09.029733 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-223" Sep 12 17:48:09.030142 kubelet[2828]: E0912 17:48:09.030108 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.223:6443/api/v1/nodes\": dial tcp 172.31.16.223:6443: connect: connection refused" node="ip-172-31-16-223" Sep 12 17:48:09.035608 containerd[1912]: time="2025-09-12T17:48:09.035552663Z" level=info msg="CreateContainer within sandbox \"d4916f1ae2a10b928f470040df9024ece750697e15e071f14b1aa45936b24b58\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:48:09.065481 containerd[1912]: time="2025-09-12T17:48:09.065444812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-223,Uid:c45314ef6aa022fe9671e7fcdfc6fd67,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58\"" Sep 12 17:48:09.068768 containerd[1912]: time="2025-09-12T17:48:09.068363638Z" level=info msg="Container 8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:09.071560 containerd[1912]: time="2025-09-12T17:48:09.071440551Z" level=info msg="CreateContainer within sandbox \"ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:48:09.075215 containerd[1912]: time="2025-09-12T17:48:09.073954610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-223,Uid:084c19b70a33cbd8f93755c5111f4d53,Namespace:kube-system,Attempt:0,} returns sandbox id \"8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5\"" Sep 12 17:48:09.081829 containerd[1912]: time="2025-09-12T17:48:09.080459634Z" level=info msg="CreateContainer within sandbox \"8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:48:09.098759 containerd[1912]: time="2025-09-12T17:48:09.098059266Z" level=info msg="Container 89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728: CDI devices from CRI Config.CDIDevices: []" Sep 
12 17:48:09.102465 containerd[1912]: time="2025-09-12T17:48:09.102432269Z" level=info msg="Container 545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:09.110032 containerd[1912]: time="2025-09-12T17:48:09.110000698Z" level=info msg="CreateContainer within sandbox \"d4916f1ae2a10b928f470040df9024ece750697e15e071f14b1aa45936b24b58\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4\"" Sep 12 17:48:09.111052 containerd[1912]: time="2025-09-12T17:48:09.110997697Z" level=info msg="StartContainer for \"8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4\"" Sep 12 17:48:09.114045 containerd[1912]: time="2025-09-12T17:48:09.114016256Z" level=info msg="connecting to shim 8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4" address="unix:///run/containerd/s/0c92f897bf657727a56e9cbad76f2eace9a0ff3e954ba1e955d72c7e4ea5e50a" protocol=ttrpc version=3 Sep 12 17:48:09.124320 containerd[1912]: time="2025-09-12T17:48:09.124286964Z" level=info msg="CreateContainer within sandbox \"8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\"" Sep 12 17:48:09.125343 containerd[1912]: time="2025-09-12T17:48:09.125117692Z" level=info msg="StartContainer for \"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\"" Sep 12 17:48:09.127993 containerd[1912]: time="2025-09-12T17:48:09.127956352Z" level=info msg="connecting to shim 545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de" address="unix:///run/containerd/s/5c4beef987c603f8916542054c1a1a9afe85b33c7586b4724508c556aca15a70" protocol=ttrpc version=3 Sep 12 17:48:09.128384 containerd[1912]: time="2025-09-12T17:48:09.128359844Z" level=info msg="CreateContainer within sandbox \"ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\"" Sep 12 17:48:09.129342 containerd[1912]: time="2025-09-12T17:48:09.129234878Z" level=info msg="StartContainer for \"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\"" Sep 12 17:48:09.137063 containerd[1912]: time="2025-09-12T17:48:09.137014390Z" level=info msg="connecting to shim 89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728" address="unix:///run/containerd/s/9cc7a4b6d00e5ed0c7066d0d3d50f9a01d77b7cbf9e2fc4c33a4240b7893d032" protocol=ttrpc version=3 Sep 12 17:48:09.139827 kubelet[2828]: W0912 17:48:09.139651 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-223&limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:09.140184 kubelet[2828]: E0912 17:48:09.140072 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.223:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-223&limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:09.149399 systemd[1]: Started 
cri-containerd-8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4.scope - libcontainer container 8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4. Sep 12 17:48:09.167466 systemd[1]: Started cri-containerd-89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728.scope - libcontainer container 89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728. Sep 12 17:48:09.185845 systemd[1]: Started cri-containerd-545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de.scope - libcontainer container 545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de. Sep 12 17:48:09.239380 kubelet[2828]: W0912 17:48:09.239220 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:09.239380 kubelet[2828]: E0912 17:48:09.239322 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.223:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:09.261341 containerd[1912]: time="2025-09-12T17:48:09.261293955Z" level=info msg="StartContainer for \"8f1bb7949929b7fe84ca321dd152281637000ddf8218b53a65a20774475b82d4\" returns successfully" Sep 12 17:48:09.305980 containerd[1912]: time="2025-09-12T17:48:09.305935866Z" level=info msg="StartContainer for \"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\" returns successfully" Sep 12 17:48:09.335658 containerd[1912]: time="2025-09-12T17:48:09.335603494Z" level=info msg="StartContainer for \"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\" returns successfully" Sep 12 17:48:09.506041 kubelet[2828]: W0912 17:48:09.505872 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:09.506041 kubelet[2828]: E0912 17:48:09.505965 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.223:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:09.624178 kubelet[2828]: E0912 17:48:09.624114 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-223?timeout=10s\": dial tcp 172.31.16.223:6443: connect: connection refused" interval="1.6s" Sep 12 17:48:09.677869 kubelet[2828]: W0912 17:48:09.677802 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.223:6443: connect: connection refused Sep 12 17:48:09.678589 kubelet[2828]: E0912 17:48:09.677887 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://172.31.16.223:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.223:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:09.832807 kubelet[2828]: I0912 17:48:09.832699 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-223" Sep 12 17:48:09.834373 kubelet[2828]: E0912 17:48:09.834338 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.16.223:6443/api/v1/nodes\": dial tcp 172.31.16.223:6443: connect: connection refused" node="ip-172-31-16-223" Sep 12 17:48:11.437796 kubelet[2828]: I0912 17:48:11.436847 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-223" Sep 12 17:48:12.211210 kubelet[2828]: I0912 17:48:12.211162 2828 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-16-223" Sep 12 17:48:13.192671 kubelet[2828]: I0912 17:48:13.192448 2828 apiserver.go:52] "Watching apiserver" Sep 12 17:48:13.220223 kubelet[2828]: I0912 17:48:13.220185 2828 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:48:14.362013 systemd[1]: Reload requested from client PID 3100 ('systemctl') (unit session-7.scope)... Sep 12 17:48:14.362031 systemd[1]: Reloading... Sep 12 17:48:14.504197 zram_generator::config[3148]: No configuration found. Sep 12 17:48:14.824394 systemd[1]: Reloading finished in 461 ms. Sep 12 17:48:14.837724 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:48:14.850387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:14.859865 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:48:14.860114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:14.860190 systemd[1]: kubelet.service: Consumed 1.202s CPU time, 129.2M memory peak. Sep 12 17:48:14.862727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:15.180900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:15.193296 (kubelet)[3208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:48:15.266611 kubelet[3208]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:15.266611 kubelet[3208]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:48:15.266611 kubelet[3208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:48:15.266611 kubelet[3208]: I0912 17:48:15.266157 3208 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:48:15.274100 kubelet[3208]: I0912 17:48:15.274059 3208 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:48:15.274100 kubelet[3208]: I0912 17:48:15.274088 3208 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:48:15.274523 kubelet[3208]: I0912 17:48:15.274392 3208 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:48:15.275698 kubelet[3208]: I0912 17:48:15.275677 3208 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:48:15.279199 kubelet[3208]: I0912 17:48:15.278906 3208 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:48:15.283526 kubelet[3208]: I0912 17:48:15.283508 3208 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:48:15.286538 kubelet[3208]: I0912 17:48:15.286502 3208 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:48:15.286720 kubelet[3208]: I0912 17:48:15.286637 3208 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:48:15.286885 kubelet[3208]: I0912 17:48:15.286818 3208 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:48:15.287054 kubelet[3208]: I0912 17:48:15.286849 3208 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-223","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:48:15.287209 kubelet[3208]: I0912 17:48:15.287064 3208 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:48:15.287209 kubelet[3208]: I0912 17:48:15.287079 3208 container_manager_linux.go:300] "Creating device plugin 
manager" Sep 12 17:48:15.287209 kubelet[3208]: I0912 17:48:15.287115 3208 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:15.287379 kubelet[3208]: I0912 17:48:15.287363 3208 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:48:15.287379 kubelet[3208]: I0912 17:48:15.287389 3208 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:48:15.287620 kubelet[3208]: I0912 17:48:15.287434 3208 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:48:15.287620 kubelet[3208]: I0912 17:48:15.287499 3208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:48:15.291976 kubelet[3208]: I0912 17:48:15.291932 3208 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:48:15.293817 kubelet[3208]: I0912 17:48:15.293775 3208 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:48:15.294513 kubelet[3208]: I0912 17:48:15.294493 3208 server.go:1274] "Started kubelet" Sep 12 17:48:15.300278 kubelet[3208]: I0912 17:48:15.300251 3208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:48:15.305704 kubelet[3208]: I0912 17:48:15.305673 3208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:48:15.307801 kubelet[3208]: I0912 17:48:15.307782 3208 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:48:15.308083 kubelet[3208]: E0912 17:48:15.308038 3208 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-16-223\" not found" Sep 12 17:48:15.309283 kubelet[3208]: I0912 17:48:15.308826 3208 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:48:15.309283 kubelet[3208]: I0912 17:48:15.308966 3208 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:48:15.315672 kubelet[3208]: I0912 17:48:15.315624 3208 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:48:15.317884 kubelet[3208]: I0912 17:48:15.316803 3208 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:48:15.317986 kubelet[3208]: I0912 17:48:15.317916 3208 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:48:15.318214 kubelet[3208]: I0912 17:48:15.318127 3208 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:48:15.322536 kubelet[3208]: I0912 17:48:15.322498 3208 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:48:15.324263 kubelet[3208]: I0912 17:48:15.324096 3208 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:48:15.324263 kubelet[3208]: I0912 17:48:15.324128 3208 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:48:15.324263 kubelet[3208]: I0912 17:48:15.324148 3208 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:48:15.324263 kubelet[3208]: E0912 17:48:15.324217 3208 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:48:15.338920 kubelet[3208]: I0912 17:48:15.337539 3208 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:48:15.338920 kubelet[3208]: I0912 17:48:15.337560 3208 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:48:15.338920 kubelet[3208]: I0912 17:48:15.337676 3208 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:48:15.400972 sudo[3239]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:48:15.401513 sudo[3239]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:48:15.424671 kubelet[3208]: I0912 17:48:15.424398 3208 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:48:15.424671 kubelet[3208]: I0912 17:48:15.424417 3208 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:48:15.424671 kubelet[3208]: I0912 17:48:15.424439 3208 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:15.424900 kubelet[3208]: I0912 17:48:15.424620 3208 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:48:15.424990 kubelet[3208]: I0912 17:48:15.424960 3208 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:48:15.425051 kubelet[3208]: I0912 17:48:15.425044 3208 policy_none.go:49] "None policy: Start" Sep 12 17:48:15.425779 kubelet[3208]: E0912 17:48:15.425578 3208 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:48:15.427216 kubelet[3208]: I0912 17:48:15.426092 3208 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:48:15.427216 kubelet[3208]: I0912 17:48:15.426115 3208 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:48:15.427216 kubelet[3208]: I0912 17:48:15.426330 3208 state_mem.go:75] "Updated machine memory state" Sep 12 17:48:15.435625 kubelet[3208]: I0912 17:48:15.435428 3208 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:48:15.435731 kubelet[3208]: I0912 17:48:15.435626 3208 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:48:15.435731 kubelet[3208]: I0912 17:48:15.435638 3208 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:48:15.436836 kubelet[3208]: I0912 17:48:15.436814 3208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:48:15.558808 kubelet[3208]: I0912 17:48:15.557595 3208 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-16-223" Sep 12 17:48:15.576957 kubelet[3208]: I0912 17:48:15.576856 3208 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-16-223" Sep 12 17:48:15.577447 kubelet[3208]: I0912 17:48:15.577303 3208 kubelet_node_status.go:75] "Successfully registered 
node" node="ip-172-31-16-223" Sep 12 17:48:15.713564 kubelet[3208]: I0912 17:48:15.713461 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:15.714089 kubelet[3208]: I0912 17:48:15.713946 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/084c19b70a33cbd8f93755c5111f4d53-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-223\" (UID: \"084c19b70a33cbd8f93755c5111f4d53\") " pod="kube-system/kube-scheduler-ip-172-31-16-223" Sep 12 17:48:15.714089 kubelet[3208]: I0912 17:48:15.714028 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12ce029cbab989309f94faa0f0566044-ca-certs\") pod \"kube-apiserver-ip-172-31-16-223\" (UID: \"12ce029cbab989309f94faa0f0566044\") " pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:15.714089 kubelet[3208]: I0912 17:48:15.714048 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12ce029cbab989309f94faa0f0566044-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-223\" (UID: \"12ce029cbab989309f94faa0f0566044\") " pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:15.714520 kubelet[3208]: I0912 17:48:15.714064 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12ce029cbab989309f94faa0f0566044-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-223\" (UID: \"12ce029cbab989309f94faa0f0566044\") " pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:15.714520 kubelet[3208]: I0912 17:48:15.714322 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:15.714520 kubelet[3208]: I0912 17:48:15.714464 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:15.714713 kubelet[3208]: I0912 17:48:15.714578 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:15.714713 kubelet[3208]: I0912 17:48:15.714602 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c45314ef6aa022fe9671e7fcdfc6fd67-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-223\" (UID: \"c45314ef6aa022fe9671e7fcdfc6fd67\") " pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:15.848477 sudo[3239]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:16.288293 kubelet[3208]: I0912 17:48:16.288247 3208 apiserver.go:52] "Watching apiserver" Sep 12 17:48:16.309489 kubelet[3208]: I0912 17:48:16.309386 3208 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:48:16.407326 kubelet[3208]: E0912 17:48:16.407286 3208 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-223\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-223" Sep 12 17:48:16.408722 kubelet[3208]: E0912 17:48:16.408699 3208 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-223\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-223" Sep 12 17:48:16.437317 kubelet[3208]: I0912 17:48:16.436969 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-223" podStartSLOduration=1.436934119 podStartE2EDuration="1.436934119s" podCreationTimestamp="2025-09-12 17:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:16.422687615 +0000 UTC m=+1.220958384" watchObservedRunningTime="2025-09-12 17:48:16.436934119 +0000 UTC m=+1.235204885" Sep 12 17:48:16.438047 kubelet[3208]: I0912 17:48:16.438014 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-223" podStartSLOduration=1.437917309 podStartE2EDuration="1.437917309s" podCreationTimestamp="2025-09-12 17:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:16.437920777 +0000 UTC m=+1.236191531" watchObservedRunningTime="2025-09-12 17:48:16.437917309 +0000 UTC m=+1.236188081" Sep 12 17:48:16.449544 kubelet[3208]: I0912 17:48:16.449471 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-223" podStartSLOduration=1.44942192 podStartE2EDuration="1.44942192s" podCreationTimestamp="2025-09-12 17:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:16.448461043 +0000 UTC m=+1.246731816" watchObservedRunningTime="2025-09-12 17:48:16.44942192 +0000 UTC m=+1.247692692" Sep 12 17:48:17.478140 sudo[2264]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:17.500077 sshd[2263]: Connection closed by 139.178.68.195 port 42450 Sep 12 17:48:17.501313 sshd-session[2260]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:17.505751 systemd[1]: sshd@6-172.31.16.223:22-139.178.68.195:42450.service: Deactivated successfully. Sep 12 17:48:17.508095 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:48:17.508344 systemd[1]: session-7.scope: Consumed 4.252s CPU time, 212.8M memory peak. Sep 12 17:48:17.510245 systemd-logind[1858]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:48:17.512008 systemd-logind[1858]: Removed session 7. 
Sep 12 17:48:21.079851 kubelet[3208]: I0912 17:48:21.079789 3208 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:48:21.081388 kubelet[3208]: I0912 17:48:21.080845 3208 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:48:21.081430 containerd[1912]: time="2025-09-12T17:48:21.080143928Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:48:21.949488 systemd[1]: Created slice kubepods-besteffort-pod126c2a87_73f8_4b1b_af75_b2b3188c9619.slice - libcontainer container kubepods-besteffort-pod126c2a87_73f8_4b1b_af75_b2b3188c9619.slice. Sep 12 17:48:21.952494 kubelet[3208]: W0912 17:48:21.952456 3208 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-223" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-223' and this object Sep 12 17:48:21.952584 kubelet[3208]: E0912 17:48:21.952503 3208 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-16-223\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-16-223' and this object" logger="UnhandledError" Sep 12 17:48:21.955358 kubelet[3208]: I0912 17:48:21.955327 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/126c2a87-73f8-4b1b-af75-b2b3188c9619-kube-proxy\") pod \"kube-proxy-cpmmm\" (UID: \"126c2a87-73f8-4b1b-af75-b2b3188c9619\") " pod="kube-system/kube-proxy-cpmmm" Sep 12 17:48:21.955358 kubelet[3208]: I0912 17:48:21.955359 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-xtables-lock\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955358 kubelet[3208]: I0912 17:48:21.955377 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-cgroup\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955547 kubelet[3208]: I0912 17:48:21.955392 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7e886de-a052-4260-8ede-050d4d6994fa-clustermesh-secrets\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955547 kubelet[3208]: I0912 17:48:21.955409 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-kernel\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955547 kubelet[3208]: I0912 17:48:21.955425 3208 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-hostproc\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955547 kubelet[3208]: I0912 17:48:21.955440 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-etc-cni-netd\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955547 kubelet[3208]: I0912 17:48:21.955467 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-net\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955547 kubelet[3208]: I0912 17:48:21.955483 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-bpf-maps\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955699 kubelet[3208]: I0912 17:48:21.955496 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-config-path\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955699 kubelet[3208]: I0912 17:48:21.955511 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdlh\" (UniqueName: \"kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-kube-api-access-wzdlh\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955699 kubelet[3208]: I0912 17:48:21.955527 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfgbv\" (UniqueName: \"kubernetes.io/projected/126c2a87-73f8-4b1b-af75-b2b3188c9619-kube-api-access-cfgbv\") pod \"kube-proxy-cpmmm\" (UID: \"126c2a87-73f8-4b1b-af75-b2b3188c9619\") " pod="kube-system/kube-proxy-cpmmm" Sep 12 17:48:21.955699 kubelet[3208]: I0912 17:48:21.955547 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cni-path\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955699 kubelet[3208]: I0912 17:48:21.955572 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-lib-modules\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955828 kubelet[3208]: I0912 17:48:21.955588 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/126c2a87-73f8-4b1b-af75-b2b3188c9619-xtables-lock\") pod \"kube-proxy-cpmmm\" (UID: \"126c2a87-73f8-4b1b-af75-b2b3188c9619\") " pod="kube-system/kube-proxy-cpmmm" Sep 12 17:48:21.955828 kubelet[3208]: I0912 17:48:21.955603 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/126c2a87-73f8-4b1b-af75-b2b3188c9619-lib-modules\") pod \"kube-proxy-cpmmm\" (UID: \"126c2a87-73f8-4b1b-af75-b2b3188c9619\") " pod="kube-system/kube-proxy-cpmmm" Sep 12 17:48:21.955828 kubelet[3208]: I0912 17:48:21.955617 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-run\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.955828 kubelet[3208]: I0912 17:48:21.955632 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-hubble-tls\") pod \"cilium-qjnkn\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " pod="kube-system/cilium-qjnkn" Sep 12 17:48:21.962395 systemd[1]: Created slice kubepods-burstable-podf7e886de_a052_4260_8ede_050d4d6994fa.slice - libcontainer container kubepods-burstable-podf7e886de_a052_4260_8ede_050d4d6994fa.slice. Sep 12 17:48:22.182945 systemd[1]: Created slice kubepods-besteffort-podc1cd7ce3_ea60_4cf7_ba05_6bd6bb8f52b7.slice - libcontainer container kubepods-besteffort-podc1cd7ce3_ea60_4cf7_ba05_6bd6bb8f52b7.slice. Sep 12 17:48:22.259960 kubelet[3208]: I0912 17:48:22.259323 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6kk\" (UniqueName: \"kubernetes.io/projected/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-kube-api-access-pv6kk\") pod \"cilium-operator-5d85765b45-dwzt2\" (UID: \"c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7\") " pod="kube-system/cilium-operator-5d85765b45-dwzt2" Sep 12 17:48:22.259960 kubelet[3208]: I0912 17:48:22.259396 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-cilium-config-path\") pod \"cilium-operator-5d85765b45-dwzt2\" (UID: \"c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7\") " pod="kube-system/cilium-operator-5d85765b45-dwzt2" Sep 12 17:48:23.088831 containerd[1912]: time="2025-09-12T17:48:23.088793080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dwzt2,Uid:c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:23.131674 containerd[1912]: time="2025-09-12T17:48:23.131567449Z" level=info msg="connecting to shim 6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8" address="unix:///run/containerd/s/77b2639d3007da5b615f1673acfde426d8f984c5bbf223a70b477b990da9f38f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:23.159943 containerd[1912]: time="2025-09-12T17:48:23.159895037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpmmm,Uid:126c2a87-73f8-4b1b-af75-b2b3188c9619,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:23.166359 containerd[1912]: time="2025-09-12T17:48:23.166292043Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-qjnkn,Uid:f7e886de-a052-4260-8ede-050d4d6994fa,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:23.176525 systemd[1]: Started cri-containerd-6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8.scope - libcontainer container 6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8. Sep 12 17:48:23.251848 containerd[1912]: time="2025-09-12T17:48:23.251310803Z" level=info msg="connecting to shim 57f4346835d6f09e5e5b76ba46720b976efe33b556ad4ae7873442cf7927f051" address="unix:///run/containerd/s/52c0c93c684d1958a984b97edcd06360b6de0507865b4139d81abd129080d516" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:23.264548 containerd[1912]: time="2025-09-12T17:48:23.264477604Z" level=info msg="connecting to shim 69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90" address="unix:///run/containerd/s/0b6af2a10c37639be1d94834eb9cf17540241cdc6df6d9706907c62a2213ec2e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:23.285652 containerd[1912]: time="2025-09-12T17:48:23.285599116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dwzt2,Uid:c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\"" Sep 12 17:48:23.294775 containerd[1912]: time="2025-09-12T17:48:23.294737924Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:48:23.311479 systemd[1]: Started cri-containerd-57f4346835d6f09e5e5b76ba46720b976efe33b556ad4ae7873442cf7927f051.scope - libcontainer container 57f4346835d6f09e5e5b76ba46720b976efe33b556ad4ae7873442cf7927f051. Sep 12 17:48:23.316874 systemd[1]: Started cri-containerd-69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90.scope - libcontainer container 69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90. 
Sep 12 17:48:23.361690 containerd[1912]: time="2025-09-12T17:48:23.361567565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cpmmm,Uid:126c2a87-73f8-4b1b-af75-b2b3188c9619,Namespace:kube-system,Attempt:0,} returns sandbox id \"57f4346835d6f09e5e5b76ba46720b976efe33b556ad4ae7873442cf7927f051\"" Sep 12 17:48:23.365505 containerd[1912]: time="2025-09-12T17:48:23.365460199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjnkn,Uid:f7e886de-a052-4260-8ede-050d4d6994fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\"" Sep 12 17:48:23.368687 containerd[1912]: time="2025-09-12T17:48:23.368470111Z" level=info msg="CreateContainer within sandbox \"57f4346835d6f09e5e5b76ba46720b976efe33b556ad4ae7873442cf7927f051\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:48:23.385985 containerd[1912]: time="2025-09-12T17:48:23.385937538Z" level=info msg="Container 164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:23.401915 containerd[1912]: time="2025-09-12T17:48:23.401860283Z" level=info msg="CreateContainer within sandbox \"57f4346835d6f09e5e5b76ba46720b976efe33b556ad4ae7873442cf7927f051\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff\"" Sep 12 17:48:23.403593 containerd[1912]: time="2025-09-12T17:48:23.402774602Z" level=info msg="StartContainer for \"164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff\"" Sep 12 17:48:23.404009 containerd[1912]: time="2025-09-12T17:48:23.403980408Z" level=info msg="connecting to shim 164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff" address="unix:///run/containerd/s/52c0c93c684d1958a984b97edcd06360b6de0507865b4139d81abd129080d516" protocol=ttrpc version=3 Sep 12 17:48:23.433518 systemd[1]: Started cri-containerd-164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff.scope - libcontainer container 164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff. Sep 12 17:48:23.473041 containerd[1912]: time="2025-09-12T17:48:23.473000188Z" level=info msg="StartContainer for \"164290639609b8a9a0d123007f9f2ceb0e3bf811d64ac4f541779a02429a4cff\" returns successfully" Sep 12 17:48:24.435294 kubelet[3208]: I0912 17:48:24.434983 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cpmmm" podStartSLOduration=3.434961945 podStartE2EDuration="3.434961945s" podCreationTimestamp="2025-09-12 17:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:24.434890813 +0000 UTC m=+9.233161585" watchObservedRunningTime="2025-09-12 17:48:24.434961945 +0000 UTC m=+9.233232717" Sep 12 17:48:24.539915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192140561.mount: Deactivated successfully. 
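
Each RunPodSandbox above gets its own shim socket under /run/containerd/s/, and later containers for the same pod reuse that socket: the kube-proxy container 164290… connects to the same address as its sandbox 57f434…. A minimal sketch that groups the "connecting to shim" entries by socket, assuming one containerd journal entry per line:

import re
from collections import defaultdict

# Matches only the msg format visible in this log.
SHIM = re.compile(r'msg="connecting to shim (?P<id>[0-9a-f]+)" address="(?P<sock>unix://[^"]+)"')

def group_by_shim_socket(lines):
    """Return {shim socket path: [shim/container ids seen connecting to it]}."""
    groups = defaultdict(list)
    for line in lines:
        m = SHIM.search(line)
        if m:
            groups[m.group("sock")].append(m.group("id"))
    return dict(groups)
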
Sep 12 17:48:25.308879 containerd[1912]: time="2025-09-12T17:48:25.308816933Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:25.310734 containerd[1912]: time="2025-09-12T17:48:25.310681238Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 17:48:25.313037 containerd[1912]: time="2025-09-12T17:48:25.312980779Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:25.314490 containerd[1912]: time="2025-09-12T17:48:25.314360011Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.01938953s" Sep 12 17:48:25.314490 containerd[1912]: time="2025-09-12T17:48:25.314397094Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:48:25.315984 containerd[1912]: time="2025-09-12T17:48:25.315960310Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:48:25.316675 containerd[1912]: time="2025-09-12T17:48:25.316642052Z" level=info msg="CreateContainer within sandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:48:25.335739 containerd[1912]: time="2025-09-12T17:48:25.334698231Z" level=info msg="Container 664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:25.349640 containerd[1912]: time="2025-09-12T17:48:25.349598620Z" level=info msg="CreateContainer within sandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\"" Sep 12 17:48:25.350995 containerd[1912]: time="2025-09-12T17:48:25.350196973Z" level=info msg="StartContainer for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\"" Sep 12 17:48:25.350995 containerd[1912]: time="2025-09-12T17:48:25.350917284Z" level=info msg="connecting to shim 664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360" address="unix:///run/containerd/s/77b2639d3007da5b615f1673acfde426d8f984c5bbf223a70b477b990da9f38f" protocol=ttrpc version=3 Sep 12 17:48:25.380437 systemd[1]: Started cri-containerd-664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360.scope - libcontainer container 664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360. 
Sep 12 17:48:25.429030 containerd[1912]: time="2025-09-12T17:48:25.428926499Z" level=info msg="StartContainer for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" returns successfully" Sep 12 17:48:28.363731 update_engine[1862]: I20250912 17:48:28.363646 1862 update_attempter.cc:509] Updating boot flags... Sep 12 17:48:31.331048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776121083.mount: Deactivated successfully. Sep 12 17:48:33.945396 containerd[1912]: time="2025-09-12T17:48:33.945318577Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:33.948144 containerd[1912]: time="2025-09-12T17:48:33.948101095Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:48:33.950878 containerd[1912]: time="2025-09-12T17:48:33.949976654Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:33.951291 containerd[1912]: time="2025-09-12T17:48:33.951256783Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.635267878s" Sep 12 17:48:33.951371 containerd[1912]: time="2025-09-12T17:48:33.951298718Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:48:33.953946 containerd[1912]: time="2025-09-12T17:48:33.953732622Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:48:34.001452 containerd[1912]: time="2025-09-12T17:48:34.001406124Z" level=info msg="Container 6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:34.009606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1712328243.mount: Deactivated successfully. 
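
containerd reports both the bytes read and the elapsed pull time for the two Cilium images above, so a rough pull throughput falls out directly; "bytes read" approximates the compressed transfer, not the unpacked size. A back-of-the-envelope calculation using the figures from this log:

def pull_rate_mb_s(bytes_read: int, seconds: float) -> float:
    # Decimal megabytes per second from containerd's reported figures.
    return bytes_read / seconds / 1_000_000

print(f"operator-generic: {pull_rate_mb_s(18904197, 2.01938953):.1f} MB/s")
print(f"cilium:           {pull_rate_mb_s(166730503, 8.635267878):.1f} MB/s")
# ~9.4 MB/s and ~19.3 MB/s respectively.
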
Sep 12 17:48:34.038446 containerd[1912]: time="2025-09-12T17:48:34.038385622Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\"" Sep 12 17:48:34.039014 containerd[1912]: time="2025-09-12T17:48:34.038993469Z" level=info msg="StartContainer for \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\"" Sep 12 17:48:34.039908 containerd[1912]: time="2025-09-12T17:48:34.039879165Z" level=info msg="connecting to shim 6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025" address="unix:///run/containerd/s/0b6af2a10c37639be1d94834eb9cf17540241cdc6df6d9706907c62a2213ec2e" protocol=ttrpc version=3 Sep 12 17:48:34.074403 systemd[1]: Started cri-containerd-6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025.scope - libcontainer container 6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025. Sep 12 17:48:34.124390 containerd[1912]: time="2025-09-12T17:48:34.124328994Z" level=info msg="StartContainer for \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" returns successfully" Sep 12 17:48:34.137877 systemd[1]: cri-containerd-6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025.scope: Deactivated successfully. Sep 12 17:48:34.171522 containerd[1912]: time="2025-09-12T17:48:34.171465946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" id:\"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" pid:3939 exited_at:{seconds:1757699314 nanos:139113342}" Sep 12 17:48:34.172571 containerd[1912]: time="2025-09-12T17:48:34.172539695Z" level=info msg="received exit event container_id:\"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" id:\"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" pid:3939 exited_at:{seconds:1757699314 nanos:139113342}" Sep 12 17:48:34.209482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025-rootfs.mount: Deactivated successfully. 
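
The exited_at fields in the TaskExit events above are plain Unix timestamps (seconds plus nanos), which makes them easy to cross-check against the journal's own timestamps:

from datetime import datetime, timezone

# seconds value taken from the mount-cgroup TaskExit event above
print(datetime.fromtimestamp(1757699314, tz=timezone.utc))
# 2025-09-12 17:48:34+00:00 -- matches the surrounding journal timestamps
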
Sep 12 17:48:34.474196 containerd[1912]: time="2025-09-12T17:48:34.473810971Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:48:34.488364 containerd[1912]: time="2025-09-12T17:48:34.488242154Z" level=info msg="Container edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:34.499584 containerd[1912]: time="2025-09-12T17:48:34.499544572Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\"" Sep 12 17:48:34.501124 containerd[1912]: time="2025-09-12T17:48:34.501064169Z" level=info msg="StartContainer for \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\"" Sep 12 17:48:34.503339 containerd[1912]: time="2025-09-12T17:48:34.503301342Z" level=info msg="connecting to shim edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b" address="unix:///run/containerd/s/0b6af2a10c37639be1d94834eb9cf17540241cdc6df6d9706907c62a2213ec2e" protocol=ttrpc version=3 Sep 12 17:48:34.509999 kubelet[3208]: I0912 17:48:34.508999 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dwzt2" podStartSLOduration=10.483746753 podStartE2EDuration="12.508968997s" podCreationTimestamp="2025-09-12 17:48:22 +0000 UTC" firstStartedPulling="2025-09-12 17:48:23.290145969 +0000 UTC m=+8.088416727" lastFinishedPulling="2025-09-12 17:48:25.315368217 +0000 UTC m=+10.113638971" observedRunningTime="2025-09-12 17:48:26.457368359 +0000 UTC m=+11.255639142" watchObservedRunningTime="2025-09-12 17:48:34.508968997 +0000 UTC m=+19.307239780" Sep 12 17:48:34.530581 systemd[1]: Started cri-containerd-edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b.scope - libcontainer container edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b. Sep 12 17:48:34.571731 containerd[1912]: time="2025-09-12T17:48:34.571594123Z" level=info msg="StartContainer for \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" returns successfully" Sep 12 17:48:34.588108 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:48:34.588464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:48:34.589805 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:48:34.594574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:48:34.600652 systemd[1]: cri-containerd-edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b.scope: Deactivated successfully. 
Sep 12 17:48:34.603510 containerd[1912]: time="2025-09-12T17:48:34.602638145Z" level=info msg="received exit event container_id:\"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" id:\"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" pid:3983 exited_at:{seconds:1757699314 nanos:602394564}" Sep 12 17:48:34.603510 containerd[1912]: time="2025-09-12T17:48:34.602926279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" id:\"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" pid:3983 exited_at:{seconds:1757699314 nanos:602394564}" Sep 12 17:48:34.647895 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:48:35.479297 containerd[1912]: time="2025-09-12T17:48:35.479153037Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:48:35.496402 containerd[1912]: time="2025-09-12T17:48:35.495878753Z" level=info msg="Container e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:35.506770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918784853.mount: Deactivated successfully. Sep 12 17:48:35.519585 containerd[1912]: time="2025-09-12T17:48:35.519539663Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\"" Sep 12 17:48:35.521595 containerd[1912]: time="2025-09-12T17:48:35.521456514Z" level=info msg="StartContainer for \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\"" Sep 12 17:48:35.525541 containerd[1912]: time="2025-09-12T17:48:35.525472792Z" level=info msg="connecting to shim e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2" address="unix:///run/containerd/s/0b6af2a10c37639be1d94834eb9cf17540241cdc6df6d9706907c62a2213ec2e" protocol=ttrpc version=3 Sep 12 17:48:35.562429 systemd[1]: Started cri-containerd-e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2.scope - libcontainer container e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2. Sep 12 17:48:35.608128 containerd[1912]: time="2025-09-12T17:48:35.608089508Z" level=info msg="StartContainer for \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" returns successfully" Sep 12 17:48:35.617948 systemd[1]: cri-containerd-e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2.scope: Deactivated successfully. Sep 12 17:48:35.618486 systemd[1]: cri-containerd-e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2.scope: Consumed 25ms CPU time, 4.2M memory peak, 1.2M read from disk. 
Sep 12 17:48:35.620346 containerd[1912]: time="2025-09-12T17:48:35.620315132Z" level=info msg="received exit event container_id:\"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" id:\"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" pid:4030 exited_at:{seconds:1757699315 nanos:620091906}" Sep 12 17:48:35.620955 containerd[1912]: time="2025-09-12T17:48:35.620703585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" id:\"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" pid:4030 exited_at:{seconds:1757699315 nanos:620091906}" Sep 12 17:48:35.643444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2-rootfs.mount: Deactivated successfully. Sep 12 17:48:36.482540 containerd[1912]: time="2025-09-12T17:48:36.482497294Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:48:36.502088 containerd[1912]: time="2025-09-12T17:48:36.500795537Z" level=info msg="Container 4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:36.524867 containerd[1912]: time="2025-09-12T17:48:36.524825868Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\"" Sep 12 17:48:36.526622 containerd[1912]: time="2025-09-12T17:48:36.526593308Z" level=info msg="StartContainer for \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\"" Sep 12 17:48:36.527862 containerd[1912]: time="2025-09-12T17:48:36.527826760Z" level=info msg="connecting to shim 4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4" address="unix:///run/containerd/s/0b6af2a10c37639be1d94834eb9cf17540241cdc6df6d9706907c62a2213ec2e" protocol=ttrpc version=3 Sep 12 17:48:36.548353 systemd[1]: Started cri-containerd-4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4.scope - libcontainer container 4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4. Sep 12 17:48:36.580145 systemd[1]: cri-containerd-4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4.scope: Deactivated successfully. 
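
The cilium-qjnkn init containers above run strictly in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, then clean-cilium-state below), and that ordering can be reconstructed by joining the "returns container id" entries with the TaskExit timestamps. A minimal sketch, assuming one containerd journal entry per line; the regexes follow the backslash-escaped msg format visible here:

import re

CREATED = re.compile(
    r'ContainerMetadata\{Name:(?P<name>[^,]+),.*?returns container id \\?"(?P<id>[0-9a-f]{64})')
EXITED = re.compile(
    r'received exit event container_id:\\?"(?P<id>[0-9a-f]{64}).*?exited_at:\{seconds:(?P<sec>\d+)')

def init_container_timeline(lines):
    """Return (container name, exit epoch seconds) pairs ordered by exit time."""
    names, exits = {}, {}
    for line in lines:
        if (m := CREATED.search(line)):
            names[m.group("id")] = m.group("name")
        if (m := EXITED.search(line)):
            exits[m.group("id")] = int(m.group("sec"))
    return sorted(((names.get(cid, cid), sec) for cid, sec in exits.items()),
                  key=lambda item: item[1])
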
Sep 12 17:48:36.582210 containerd[1912]: time="2025-09-12T17:48:36.582122983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" id:\"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" pid:4068 exited_at:{seconds:1757699316 nanos:581603597}" Sep 12 17:48:36.583710 containerd[1912]: time="2025-09-12T17:48:36.583602113Z" level=info msg="received exit event container_id:\"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" id:\"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" pid:4068 exited_at:{seconds:1757699316 nanos:581603597}" Sep 12 17:48:36.586688 containerd[1912]: time="2025-09-12T17:48:36.586659472Z" level=info msg="StartContainer for \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" returns successfully" Sep 12 17:48:36.605939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4-rootfs.mount: Deactivated successfully. Sep 12 17:48:37.491814 containerd[1912]: time="2025-09-12T17:48:37.491750506Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:48:37.537273 containerd[1912]: time="2025-09-12T17:48:37.536564517Z" level=info msg="Container 2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:37.538550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820561889.mount: Deactivated successfully. Sep 12 17:48:37.549862 containerd[1912]: time="2025-09-12T17:48:37.549825664Z" level=info msg="CreateContainer within sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\"" Sep 12 17:48:37.551985 containerd[1912]: time="2025-09-12T17:48:37.550934669Z" level=info msg="StartContainer for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\"" Sep 12 17:48:37.552994 containerd[1912]: time="2025-09-12T17:48:37.552941671Z" level=info msg="connecting to shim 2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7" address="unix:///run/containerd/s/0b6af2a10c37639be1d94834eb9cf17540241cdc6df6d9706907c62a2213ec2e" protocol=ttrpc version=3 Sep 12 17:48:37.597667 systemd[1]: Started cri-containerd-2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7.scope - libcontainer container 2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7. 
Sep 12 17:48:37.654383 containerd[1912]: time="2025-09-12T17:48:37.654344048Z" level=info msg="StartContainer for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" returns successfully" Sep 12 17:48:37.802507 containerd[1912]: time="2025-09-12T17:48:37.801676983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" id:\"793dbf4d9f16ed423e747dd71ea46353d4d3331124a8163a8369e9ac31d8f4f8\" pid:4138 exited_at:{seconds:1757699317 nanos:801350610}" Sep 12 17:48:37.821858 kubelet[3208]: I0912 17:48:37.821829 3208 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:48:37.870734 systemd[1]: Created slice kubepods-burstable-pod9f167b10_6c7a_4105_bf8c_c701b8254121.slice - libcontainer container kubepods-burstable-pod9f167b10_6c7a_4105_bf8c_c701b8254121.slice. Sep 12 17:48:37.884976 systemd[1]: Created slice kubepods-burstable-pod12a2525b_0361_4366_acd4_7dbb90a2e955.slice - libcontainer container kubepods-burstable-pod12a2525b_0361_4366_acd4_7dbb90a2e955.slice. Sep 12 17:48:37.979118 kubelet[3208]: I0912 17:48:37.979077 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6r99\" (UniqueName: \"kubernetes.io/projected/9f167b10-6c7a-4105-bf8c-c701b8254121-kube-api-access-z6r99\") pod \"coredns-7c65d6cfc9-g89v5\" (UID: \"9f167b10-6c7a-4105-bf8c-c701b8254121\") " pod="kube-system/coredns-7c65d6cfc9-g89v5" Sep 12 17:48:37.979118 kubelet[3208]: I0912 17:48:37.979123 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12a2525b-0361-4366-acd4-7dbb90a2e955-config-volume\") pod \"coredns-7c65d6cfc9-mtt9x\" (UID: \"12a2525b-0361-4366-acd4-7dbb90a2e955\") " pod="kube-system/coredns-7c65d6cfc9-mtt9x" Sep 12 17:48:37.979310 kubelet[3208]: I0912 17:48:37.979141 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m66bm\" (UniqueName: \"kubernetes.io/projected/12a2525b-0361-4366-acd4-7dbb90a2e955-kube-api-access-m66bm\") pod \"coredns-7c65d6cfc9-mtt9x\" (UID: \"12a2525b-0361-4366-acd4-7dbb90a2e955\") " pod="kube-system/coredns-7c65d6cfc9-mtt9x" Sep 12 17:48:37.979310 kubelet[3208]: I0912 17:48:37.979187 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f167b10-6c7a-4105-bf8c-c701b8254121-config-volume\") pod \"coredns-7c65d6cfc9-g89v5\" (UID: \"9f167b10-6c7a-4105-bf8c-c701b8254121\") " pod="kube-system/coredns-7c65d6cfc9-g89v5" Sep 12 17:48:38.181257 containerd[1912]: time="2025-09-12T17:48:38.181149780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g89v5,Uid:9f167b10-6c7a-4105-bf8c-c701b8254121,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:38.192661 containerd[1912]: time="2025-09-12T17:48:38.192317210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mtt9x,Uid:12a2525b-0361-4366-acd4-7dbb90a2e955,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:38.525680 kubelet[3208]: I0912 17:48:38.525550 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qjnkn" podStartSLOduration=6.9408572280000005 podStartE2EDuration="17.525532312s" podCreationTimestamp="2025-09-12 17:48:21 +0000 UTC" firstStartedPulling="2025-09-12 17:48:23.367585871 +0000 UTC 
m=+8.165856635" lastFinishedPulling="2025-09-12 17:48:33.952260967 +0000 UTC m=+18.750531719" observedRunningTime="2025-09-12 17:48:38.525466417 +0000 UTC m=+23.323737191" watchObservedRunningTime="2025-09-12 17:48:38.525532312 +0000 UTC m=+23.323803087" Sep 12 17:48:40.223998 systemd-networkd[1812]: cilium_host: Link UP Sep 12 17:48:40.224205 systemd-networkd[1812]: cilium_net: Link UP Sep 12 17:48:40.224416 systemd-networkd[1812]: cilium_net: Gained carrier Sep 12 17:48:40.224622 systemd-networkd[1812]: cilium_host: Gained carrier Sep 12 17:48:40.229046 (udev-worker)[4196]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:48:40.230514 (udev-worker)[4231]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:48:40.267754 systemd-networkd[1812]: cilium_net: Gained IPv6LL Sep 12 17:48:40.357595 (udev-worker)[4245]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:48:40.368583 systemd-networkd[1812]: cilium_vxlan: Link UP Sep 12 17:48:40.368599 systemd-networkd[1812]: cilium_vxlan: Gained carrier Sep 12 17:48:40.582332 systemd-networkd[1812]: cilium_host: Gained IPv6LL Sep 12 17:48:40.912192 kernel: NET: Registered PF_ALG protocol family Sep 12 17:48:41.680361 systemd-networkd[1812]: lxc_health: Link UP Sep 12 17:48:41.708568 systemd-networkd[1812]: lxc_health: Gained carrier Sep 12 17:48:41.934543 systemd-networkd[1812]: cilium_vxlan: Gained IPv6LL Sep 12 17:48:42.300323 kernel: eth0: renamed from tmp2457f Sep 12 17:48:42.304095 systemd-networkd[1812]: lxc9cca3701d266: Link UP Sep 12 17:48:42.308718 (udev-worker)[4560]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:48:42.311771 systemd-networkd[1812]: lxc9cca3701d266: Gained carrier Sep 12 17:48:42.316157 systemd-networkd[1812]: lxc753903227414: Link UP Sep 12 17:48:42.326305 kernel: eth0: renamed from tmpa0a6b Sep 12 17:48:42.328488 systemd-networkd[1812]: lxc753903227414: Gained carrier Sep 12 17:48:43.406469 systemd-networkd[1812]: lxc9cca3701d266: Gained IPv6LL Sep 12 17:48:43.534489 systemd-networkd[1812]: lxc_health: Gained IPv6LL Sep 12 17:48:43.662357 systemd-networkd[1812]: lxc753903227414: Gained IPv6LL Sep 12 17:48:46.103869 ntpd[1851]: Listen normally on 7 cilium_host 192.168.0.37:123 Sep 12 17:48:46.104300 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 7 cilium_host 192.168.0.37:123 Sep 12 17:48:46.106280 ntpd[1851]: Listen normally on 8 cilium_net [fe80::985e:daff:fe9d:4b93%4]:123 Sep 12 17:48:46.106682 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 8 cilium_net [fe80::985e:daff:fe9d:4b93%4]:123 Sep 12 17:48:46.106682 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 9 cilium_host [fe80::5410:5fff:fed0:89b5%5]:123 Sep 12 17:48:46.106682 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 10 cilium_vxlan [fe80::cc4f:24ff:fe18:1ae4%6]:123 Sep 12 17:48:46.106682 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 11 lxc_health [fe80::48bb:9bff:fe01:b8ae%8]:123 Sep 12 17:48:46.106682 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 12 lxc753903227414 [fe80::901b:f7ff:fe0d:69a6%10]:123 Sep 12 17:48:46.106682 ntpd[1851]: 12 Sep 17:48:46 ntpd[1851]: Listen normally on 13 lxc9cca3701d266 [fe80::ec8d:2dff:fe24:506a%12]:123 Sep 12 17:48:46.106361 ntpd[1851]: Listen normally on 9 cilium_host [fe80::5410:5fff:fed0:89b5%5]:123 Sep 12 17:48:46.106398 ntpd[1851]: Listen normally on 10 cilium_vxlan [fe80::cc4f:24ff:fe18:1ae4%6]:123 Sep 12 17:48:46.106437 ntpd[1851]: Listen normally on 11 
lxc_health [fe80::48bb:9bff:fe01:b8ae%8]:123 Sep 12 17:48:46.106474 ntpd[1851]: Listen normally on 12 lxc753903227414 [fe80::901b:f7ff:fe0d:69a6%10]:123 Sep 12 17:48:46.106511 ntpd[1851]: Listen normally on 13 lxc9cca3701d266 [fe80::ec8d:2dff:fe24:506a%12]:123 Sep 12 17:48:46.533835 kubelet[3208]: I0912 17:48:46.533336 3208 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:48:46.650483 containerd[1912]: time="2025-09-12T17:48:46.650327156Z" level=info msg="connecting to shim 2457f8c64f9ce4dea52e6dba7b9452de7a1331771da271227b7f790f51d71f26" address="unix:///run/containerd/s/47d0a51f30d13d9b1a26e3a0cd76796ee800704c94ace197848f34b25264a72b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:46.686990 containerd[1912]: time="2025-09-12T17:48:46.686884212Z" level=info msg="connecting to shim a0a6bf02c9437d66d473f493a6ae2824245096a1bca20dca14465b4742d4113b" address="unix:///run/containerd/s/bea13f630507006b83010df5046a4f9655342976299bff3143ef8717273b295e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:46.733795 systemd[1]: Started cri-containerd-2457f8c64f9ce4dea52e6dba7b9452de7a1331771da271227b7f790f51d71f26.scope - libcontainer container 2457f8c64f9ce4dea52e6dba7b9452de7a1331771da271227b7f790f51d71f26. Sep 12 17:48:46.755568 systemd[1]: Started cri-containerd-a0a6bf02c9437d66d473f493a6ae2824245096a1bca20dca14465b4742d4113b.scope - libcontainer container a0a6bf02c9437d66d473f493a6ae2824245096a1bca20dca14465b4742d4113b. Sep 12 17:48:46.846848 containerd[1912]: time="2025-09-12T17:48:46.846774667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g89v5,Uid:9f167b10-6c7a-4105-bf8c-c701b8254121,Namespace:kube-system,Attempt:0,} returns sandbox id \"2457f8c64f9ce4dea52e6dba7b9452de7a1331771da271227b7f790f51d71f26\"" Sep 12 17:48:46.856222 containerd[1912]: time="2025-09-12T17:48:46.856173275Z" level=info msg="CreateContainer within sandbox \"2457f8c64f9ce4dea52e6dba7b9452de7a1331771da271227b7f790f51d71f26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:48:46.875130 containerd[1912]: time="2025-09-12T17:48:46.875060891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mtt9x,Uid:12a2525b-0361-4366-acd4-7dbb90a2e955,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0a6bf02c9437d66d473f493a6ae2824245096a1bca20dca14465b4742d4113b\"" Sep 12 17:48:46.882597 containerd[1912]: time="2025-09-12T17:48:46.882547068Z" level=info msg="CreateContainer within sandbox \"a0a6bf02c9437d66d473f493a6ae2824245096a1bca20dca14465b4742d4113b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:48:46.900490 containerd[1912]: time="2025-09-12T17:48:46.900295003Z" level=info msg="Container 6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:46.902506 containerd[1912]: time="2025-09-12T17:48:46.902475471Z" level=info msg="Container ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:46.915882 containerd[1912]: time="2025-09-12T17:48:46.915843218Z" level=info msg="CreateContainer within sandbox \"2457f8c64f9ce4dea52e6dba7b9452de7a1331771da271227b7f790f51d71f26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321\"" Sep 12 17:48:46.916742 containerd[1912]: time="2025-09-12T17:48:46.916640317Z" level=info msg="StartContainer for 
\"6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321\"" Sep 12 17:48:46.926804 containerd[1912]: time="2025-09-12T17:48:46.926731245Z" level=info msg="connecting to shim 6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321" address="unix:///run/containerd/s/47d0a51f30d13d9b1a26e3a0cd76796ee800704c94ace197848f34b25264a72b" protocol=ttrpc version=3 Sep 12 17:48:46.927204 containerd[1912]: time="2025-09-12T17:48:46.926946292Z" level=info msg="CreateContainer within sandbox \"a0a6bf02c9437d66d473f493a6ae2824245096a1bca20dca14465b4742d4113b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f\"" Sep 12 17:48:46.928029 containerd[1912]: time="2025-09-12T17:48:46.928002563Z" level=info msg="StartContainer for \"ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f\"" Sep 12 17:48:46.948192 containerd[1912]: time="2025-09-12T17:48:46.948134010Z" level=info msg="connecting to shim ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f" address="unix:///run/containerd/s/bea13f630507006b83010df5046a4f9655342976299bff3143ef8717273b295e" protocol=ttrpc version=3 Sep 12 17:48:46.959423 systemd[1]: Started cri-containerd-6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321.scope - libcontainer container 6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321. Sep 12 17:48:46.990451 systemd[1]: Started cri-containerd-ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f.scope - libcontainer container ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f. Sep 12 17:48:47.048320 containerd[1912]: time="2025-09-12T17:48:47.048207391Z" level=info msg="StartContainer for \"6ff23f6734f8ecd011135271b1eab66c9f5aba686837f184144069becba0b321\" returns successfully" Sep 12 17:48:47.057819 containerd[1912]: time="2025-09-12T17:48:47.057776924Z" level=info msg="StartContainer for \"ee9d1d7eef34f290fecb60902003ffd2eead416ce04d7e8d5fceec3c66ce545f\" returns successfully" Sep 12 17:48:47.541761 kubelet[3208]: I0912 17:48:47.541700 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g89v5" podStartSLOduration=25.541683714 podStartE2EDuration="25.541683714s" podCreationTimestamp="2025-09-12 17:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:47.539532289 +0000 UTC m=+32.337803062" watchObservedRunningTime="2025-09-12 17:48:47.541683714 +0000 UTC m=+32.339954482" Sep 12 17:48:47.590360 kubelet[3208]: I0912 17:48:47.590285 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mtt9x" podStartSLOduration=25.590235549 podStartE2EDuration="25.590235549s" podCreationTimestamp="2025-09-12 17:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:47.58944379 +0000 UTC m=+32.387714563" watchObservedRunningTime="2025-09-12 17:48:47.590235549 +0000 UTC m=+32.388506334" Sep 12 17:48:47.633872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568960926.mount: Deactivated successfully. Sep 12 17:48:49.498269 systemd[1]: Started sshd@7-172.31.16.223:22-139.178.68.195:49318.service - OpenSSH per-connection server daemon (139.178.68.195:49318). 
Sep 12 17:48:49.712639 sshd[4768]: Accepted publickey for core from 139.178.68.195 port 49318 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:48:49.714795 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:49.720142 systemd-logind[1858]: New session 8 of user core. Sep 12 17:48:49.725339 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:48:50.566522 sshd[4771]: Connection closed by 139.178.68.195 port 49318 Sep 12 17:48:50.566960 sshd-session[4768]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:50.574451 systemd-logind[1858]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:48:50.574646 systemd[1]: sshd@7-172.31.16.223:22-139.178.68.195:49318.service: Deactivated successfully. Sep 12 17:48:50.577049 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:48:50.580317 systemd-logind[1858]: Removed session 8. Sep 12 17:48:55.601974 systemd[1]: Started sshd@8-172.31.16.223:22-139.178.68.195:36308.service - OpenSSH per-connection server daemon (139.178.68.195:36308). Sep 12 17:48:55.782355 sshd[4787]: Accepted publickey for core from 139.178.68.195 port 36308 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:48:55.783859 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:55.789046 systemd-logind[1858]: New session 9 of user core. Sep 12 17:48:55.795349 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:48:55.998597 sshd[4793]: Connection closed by 139.178.68.195 port 36308 Sep 12 17:48:56.000452 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:56.005129 systemd[1]: sshd@8-172.31.16.223:22-139.178.68.195:36308.service: Deactivated successfully. Sep 12 17:48:56.008910 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:48:56.011016 systemd-logind[1858]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:48:56.013105 systemd-logind[1858]: Removed session 9. Sep 12 17:49:01.036409 systemd[1]: Started sshd@9-172.31.16.223:22-139.178.68.195:52560.service - OpenSSH per-connection server daemon (139.178.68.195:52560). Sep 12 17:49:01.222438 sshd[4806]: Accepted publickey for core from 139.178.68.195 port 52560 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:01.226712 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:01.234396 systemd-logind[1858]: New session 10 of user core. Sep 12 17:49:01.242404 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:49:01.439455 sshd[4809]: Connection closed by 139.178.68.195 port 52560 Sep 12 17:49:01.440537 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:01.459819 systemd[1]: sshd@9-172.31.16.223:22-139.178.68.195:52560.service: Deactivated successfully. Sep 12 17:49:01.466933 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:49:01.470783 systemd-logind[1858]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:49:01.472578 systemd-logind[1858]: Removed session 10. Sep 12 17:49:06.479595 systemd[1]: Started sshd@10-172.31.16.223:22-139.178.68.195:52570.service - OpenSSH per-connection server daemon (139.178.68.195:52570). 
Sep 12 17:49:06.645584 sshd[4822]: Accepted publickey for core from 139.178.68.195 port 52570 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:06.651238 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:06.656404 systemd-logind[1858]: New session 11 of user core. Sep 12 17:49:06.664430 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:49:06.844026 sshd[4825]: Connection closed by 139.178.68.195 port 52570 Sep 12 17:49:06.844666 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:06.848222 systemd[1]: sshd@10-172.31.16.223:22-139.178.68.195:52570.service: Deactivated successfully. Sep 12 17:49:06.849903 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:49:06.853241 systemd-logind[1858]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:49:06.854580 systemd-logind[1858]: Removed session 11. Sep 12 17:49:06.875317 systemd[1]: Started sshd@11-172.31.16.223:22-139.178.68.195:52576.service - OpenSSH per-connection server daemon (139.178.68.195:52576). Sep 12 17:49:07.049467 sshd[4838]: Accepted publickey for core from 139.178.68.195 port 52576 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:07.050884 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:07.055475 systemd-logind[1858]: New session 12 of user core. Sep 12 17:49:07.062374 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:49:07.338463 sshd[4841]: Connection closed by 139.178.68.195 port 52576 Sep 12 17:49:07.340306 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:07.349384 systemd-logind[1858]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:49:07.349924 systemd[1]: sshd@11-172.31.16.223:22-139.178.68.195:52576.service: Deactivated successfully. Sep 12 17:49:07.354980 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:49:07.374383 systemd-logind[1858]: Removed session 12. Sep 12 17:49:07.376122 systemd[1]: Started sshd@12-172.31.16.223:22-139.178.68.195:52584.service - OpenSSH per-connection server daemon (139.178.68.195:52584). Sep 12 17:49:07.591311 sshd[4851]: Accepted publickey for core from 139.178.68.195 port 52584 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:07.592541 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:07.598586 systemd-logind[1858]: New session 13 of user core. Sep 12 17:49:07.608893 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:49:07.823041 sshd[4854]: Connection closed by 139.178.68.195 port 52584 Sep 12 17:49:07.823617 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:07.827598 systemd[1]: sshd@12-172.31.16.223:22-139.178.68.195:52584.service: Deactivated successfully. Sep 12 17:49:07.829536 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:49:07.830591 systemd-logind[1858]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:49:07.832308 systemd-logind[1858]: Removed session 13. Sep 12 17:49:12.859322 systemd[1]: Started sshd@13-172.31.16.223:22-139.178.68.195:48850.service - OpenSSH per-connection server daemon (139.178.68.195:48850). 
Sep 12 17:49:13.034058 sshd[4867]: Accepted publickey for core from 139.178.68.195 port 48850 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:13.035585 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:13.041830 systemd-logind[1858]: New session 14 of user core. Sep 12 17:49:13.047393 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:49:13.245003 sshd[4870]: Connection closed by 139.178.68.195 port 48850 Sep 12 17:49:13.245592 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:13.249594 systemd-logind[1858]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:49:13.250285 systemd[1]: sshd@13-172.31.16.223:22-139.178.68.195:48850.service: Deactivated successfully. Sep 12 17:49:13.252115 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:49:13.254142 systemd-logind[1858]: Removed session 14. Sep 12 17:49:18.279531 systemd[1]: Started sshd@14-172.31.16.223:22-139.178.68.195:48866.service - OpenSSH per-connection server daemon (139.178.68.195:48866). Sep 12 17:49:18.450518 sshd[4886]: Accepted publickey for core from 139.178.68.195 port 48866 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:18.451989 sshd-session[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:18.457501 systemd-logind[1858]: New session 15 of user core. Sep 12 17:49:18.462451 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:49:18.667349 sshd[4889]: Connection closed by 139.178.68.195 port 48866 Sep 12 17:49:18.667945 sshd-session[4886]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:18.672056 systemd[1]: sshd@14-172.31.16.223:22-139.178.68.195:48866.service: Deactivated successfully. Sep 12 17:49:18.674271 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:49:18.675019 systemd-logind[1858]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:49:18.676528 systemd-logind[1858]: Removed session 15. Sep 12 17:49:18.703319 systemd[1]: Started sshd@15-172.31.16.223:22-139.178.68.195:48868.service - OpenSSH per-connection server daemon (139.178.68.195:48868). Sep 12 17:49:18.880808 sshd[4901]: Accepted publickey for core from 139.178.68.195 port 48868 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:18.882401 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:18.888109 systemd-logind[1858]: New session 16 of user core. Sep 12 17:49:18.892370 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:49:19.498252 sshd[4904]: Connection closed by 139.178.68.195 port 48868 Sep 12 17:49:19.499317 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:19.510462 systemd[1]: sshd@15-172.31.16.223:22-139.178.68.195:48868.service: Deactivated successfully. Sep 12 17:49:19.513015 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:49:19.516077 systemd-logind[1858]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:49:19.517555 systemd-logind[1858]: Removed session 16. Sep 12 17:49:19.532113 systemd[1]: Started sshd@16-172.31.16.223:22-139.178.68.195:48880.service - OpenSSH per-connection server daemon (139.178.68.195:48880). 
Sep 12 17:49:19.745204 sshd[4914]: Accepted publickey for core from 139.178.68.195 port 48880 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:19.745980 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:19.752023 systemd-logind[1858]: New session 17 of user core. Sep 12 17:49:19.757390 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:49:21.411187 sshd[4917]: Connection closed by 139.178.68.195 port 48880 Sep 12 17:49:21.412330 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:21.432231 systemd-logind[1858]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:49:21.443133 systemd[1]: sshd@16-172.31.16.223:22-139.178.68.195:48880.service: Deactivated successfully. Sep 12 17:49:21.448673 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:49:21.449893 systemd-logind[1858]: Removed session 17. Sep 12 17:49:21.455325 systemd[1]: Started sshd@17-172.31.16.223:22-139.178.68.195:48834.service - OpenSSH per-connection server daemon (139.178.68.195:48834). Sep 12 17:49:21.636833 sshd[4934]: Accepted publickey for core from 139.178.68.195 port 48834 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:21.638373 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:21.645300 systemd-logind[1858]: New session 18 of user core. Sep 12 17:49:21.649348 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:49:21.997714 sshd[4937]: Connection closed by 139.178.68.195 port 48834 Sep 12 17:49:21.998547 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:22.003552 systemd-logind[1858]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:49:22.004562 systemd[1]: sshd@17-172.31.16.223:22-139.178.68.195:48834.service: Deactivated successfully. Sep 12 17:49:22.007316 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:49:22.009818 systemd-logind[1858]: Removed session 18. Sep 12 17:49:22.034745 systemd[1]: Started sshd@18-172.31.16.223:22-139.178.68.195:48852.service - OpenSSH per-connection server daemon (139.178.68.195:48852). Sep 12 17:49:22.205046 sshd[4946]: Accepted publickey for core from 139.178.68.195 port 48852 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:22.206408 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:22.212497 systemd-logind[1858]: New session 19 of user core. Sep 12 17:49:22.219354 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:49:22.402119 sshd[4949]: Connection closed by 139.178.68.195 port 48852 Sep 12 17:49:22.402679 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:22.406564 systemd-logind[1858]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:49:22.407338 systemd[1]: sshd@18-172.31.16.223:22-139.178.68.195:48852.service: Deactivated successfully. Sep 12 17:49:22.409367 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:49:22.410982 systemd-logind[1858]: Removed session 19. Sep 12 17:49:27.441354 systemd[1]: Started sshd@19-172.31.16.223:22-139.178.68.195:48864.service - OpenSSH per-connection server daemon (139.178.68.195:48864). 
Sep 12 17:49:27.611740 sshd[4966]: Accepted publickey for core from 139.178.68.195 port 48864 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:27.613186 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:27.617805 systemd-logind[1858]: New session 20 of user core. Sep 12 17:49:27.623392 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:49:27.808347 sshd[4969]: Connection closed by 139.178.68.195 port 48864 Sep 12 17:49:27.809288 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:27.813568 systemd-logind[1858]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:49:27.813688 systemd[1]: sshd@19-172.31.16.223:22-139.178.68.195:48864.service: Deactivated successfully. Sep 12 17:49:27.815558 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:49:27.817478 systemd-logind[1858]: Removed session 20. Sep 12 17:49:32.852582 systemd[1]: Started sshd@20-172.31.16.223:22-139.178.68.195:55690.service - OpenSSH per-connection server daemon (139.178.68.195:55690). Sep 12 17:49:33.018955 sshd[4981]: Accepted publickey for core from 139.178.68.195 port 55690 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:33.020466 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:33.026334 systemd-logind[1858]: New session 21 of user core. Sep 12 17:49:33.033424 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:49:33.224398 sshd[4984]: Connection closed by 139.178.68.195 port 55690 Sep 12 17:49:33.225230 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:33.229385 systemd[1]: sshd@20-172.31.16.223:22-139.178.68.195:55690.service: Deactivated successfully. Sep 12 17:49:33.231408 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:49:33.233118 systemd-logind[1858]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:49:33.235033 systemd-logind[1858]: Removed session 21. Sep 12 17:49:38.258338 systemd[1]: Started sshd@21-172.31.16.223:22-139.178.68.195:55698.service - OpenSSH per-connection server daemon (139.178.68.195:55698). Sep 12 17:49:38.429806 sshd[4996]: Accepted publickey for core from 139.178.68.195 port 55698 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:38.431308 sshd-session[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:38.436239 systemd-logind[1858]: New session 22 of user core. Sep 12 17:49:38.451422 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:49:38.630460 sshd[4999]: Connection closed by 139.178.68.195 port 55698 Sep 12 17:49:38.631270 sshd-session[4996]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:38.635311 systemd[1]: sshd@21-172.31.16.223:22-139.178.68.195:55698.service: Deactivated successfully. Sep 12 17:49:38.637331 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:49:38.638303 systemd-logind[1858]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:49:38.639809 systemd-logind[1858]: Removed session 22. Sep 12 17:49:38.663576 systemd[1]: Started sshd@22-172.31.16.223:22-139.178.68.195:55710.service - OpenSSH per-connection server daemon (139.178.68.195:55710). 
Sep 12 17:49:38.833842 sshd[5011]: Accepted publickey for core from 139.178.68.195 port 55710 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:38.835222 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:38.840931 systemd-logind[1858]: New session 23 of user core. Sep 12 17:49:38.845351 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:49:40.591181 containerd[1912]: time="2025-09-12T17:49:40.591124484Z" level=info msg="StopContainer for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" with timeout 30 (s)" Sep 12 17:49:40.607954 containerd[1912]: time="2025-09-12T17:49:40.607849215Z" level=info msg="Stop container \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" with signal terminated" Sep 12 17:49:40.649223 systemd[1]: cri-containerd-664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360.scope: Deactivated successfully. Sep 12 17:49:40.652111 systemd[1]: cri-containerd-664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360.scope: Consumed 425ms CPU time, 37.3M memory peak, 15.3M read from disk, 4K written to disk. Sep 12 17:49:40.654621 containerd[1912]: time="2025-09-12T17:49:40.654070794Z" level=info msg="received exit event container_id:\"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" id:\"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" pid:3608 exited_at:{seconds:1757699380 nanos:653520415}" Sep 12 17:49:40.655115 containerd[1912]: time="2025-09-12T17:49:40.655074802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" id:\"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" pid:3608 exited_at:{seconds:1757699380 nanos:653520415}" Sep 12 17:49:40.688825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360-rootfs.mount: Deactivated successfully. Sep 12 17:49:40.691706 containerd[1912]: time="2025-09-12T17:49:40.691630464Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:49:40.696355 containerd[1912]: time="2025-09-12T17:49:40.696305244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" id:\"586453d6c2425d3c2651a5d2d651c2fa484c1d779e952ba3b4e99a77cdaa83b3\" pid:5040 exited_at:{seconds:1757699380 nanos:695442021}" Sep 12 17:49:40.700799 containerd[1912]: time="2025-09-12T17:49:40.700753679Z" level=info msg="StopContainer for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" with timeout 2 (s)" Sep 12 17:49:40.701233 containerd[1912]: time="2025-09-12T17:49:40.701203798Z" level=info msg="Stop container \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" with signal terminated" Sep 12 17:49:40.716756 systemd-networkd[1812]: lxc_health: Link DOWN Sep 12 17:49:40.716767 systemd-networkd[1812]: lxc_health: Lost carrier Sep 12 17:49:40.739871 systemd[1]: cri-containerd-2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7.scope: Deactivated successfully. 
Sep 12 17:49:40.740260 systemd[1]: cri-containerd-2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7.scope: Consumed 7.822s CPU time, 195.8M memory peak, 76.4M read from disk, 13.3M written to disk. Sep 12 17:49:40.742837 containerd[1912]: time="2025-09-12T17:49:40.742524354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" id:\"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" pid:4107 exited_at:{seconds:1757699380 nanos:742131987}" Sep 12 17:49:40.742837 containerd[1912]: time="2025-09-12T17:49:40.742631858Z" level=info msg="received exit event container_id:\"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" id:\"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" pid:4107 exited_at:{seconds:1757699380 nanos:742131987}" Sep 12 17:49:40.748204 containerd[1912]: time="2025-09-12T17:49:40.748148628Z" level=info msg="StopContainer for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" returns successfully" Sep 12 17:49:40.781609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7-rootfs.mount: Deactivated successfully. Sep 12 17:49:40.788858 containerd[1912]: time="2025-09-12T17:49:40.788579958Z" level=info msg="StopPodSandbox for \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\"" Sep 12 17:49:40.810789 containerd[1912]: time="2025-09-12T17:49:40.810126229Z" level=info msg="StopContainer for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" returns successfully" Sep 12 17:49:40.810789 containerd[1912]: time="2025-09-12T17:49:40.810761373Z" level=info msg="StopPodSandbox for \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\"" Sep 12 17:49:40.811073 containerd[1912]: time="2025-09-12T17:49:40.810838860Z" level=info msg="Container to stop \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:40.811073 containerd[1912]: time="2025-09-12T17:49:40.810855041Z" level=info msg="Container to stop \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:40.811073 containerd[1912]: time="2025-09-12T17:49:40.810869633Z" level=info msg="Container to stop \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:40.811073 containerd[1912]: time="2025-09-12T17:49:40.810880711Z" level=info msg="Container to stop \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:40.811073 containerd[1912]: time="2025-09-12T17:49:40.810891523Z" level=info msg="Container to stop \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:40.811424 containerd[1912]: time="2025-09-12T17:49:40.811399576Z" level=info msg="Container to stop \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:49:40.821762 systemd[1]: cri-containerd-69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90.scope: Deactivated successfully. 
Sep 12 17:49:40.824929 systemd[1]: cri-containerd-6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8.scope: Deactivated successfully. Sep 12 17:49:40.829822 containerd[1912]: time="2025-09-12T17:49:40.829754909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" id:\"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" pid:3401 exit_status:137 exited_at:{seconds:1757699380 nanos:828681078}" Sep 12 17:49:40.866967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8-rootfs.mount: Deactivated successfully. Sep 12 17:49:40.878150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90-rootfs.mount: Deactivated successfully. Sep 12 17:49:40.882609 containerd[1912]: time="2025-09-12T17:49:40.881672413Z" level=info msg="shim disconnected" id=69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90 namespace=k8s.io Sep 12 17:49:40.882609 containerd[1912]: time="2025-09-12T17:49:40.881818846Z" level=warning msg="cleaning up after shim disconnected" id=69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90 namespace=k8s.io Sep 12 17:49:40.901188 containerd[1912]: time="2025-09-12T17:49:40.881832117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:49:40.901419 containerd[1912]: time="2025-09-12T17:49:40.883612282Z" level=info msg="shim disconnected" id=6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8 namespace=k8s.io Sep 12 17:49:40.901419 containerd[1912]: time="2025-09-12T17:49:40.901401387Z" level=warning msg="cleaning up after shim disconnected" id=6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8 namespace=k8s.io Sep 12 17:49:40.901520 containerd[1912]: time="2025-09-12T17:49:40.901424222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:49:40.954804 containerd[1912]: time="2025-09-12T17:49:40.954754959Z" level=info msg="received exit event sandbox_id:\"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" exit_status:137 exited_at:{seconds:1757699380 nanos:828681078}" Sep 12 17:49:40.956096 containerd[1912]: time="2025-09-12T17:49:40.956033348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" id:\"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" pid:3316 exit_status:137 exited_at:{seconds:1757699380 nanos:827334250}" Sep 12 17:49:40.960009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90-shm.mount: Deactivated successfully. 
Sep 12 17:49:40.964389 containerd[1912]: time="2025-09-12T17:49:40.964353465Z" level=info msg="received exit event sandbox_id:\"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" exit_status:137 exited_at:{seconds:1757699380 nanos:827334250}" Sep 12 17:49:40.965298 containerd[1912]: time="2025-09-12T17:49:40.965275245Z" level=info msg="TearDown network for sandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" successfully" Sep 12 17:49:40.965393 containerd[1912]: time="2025-09-12T17:49:40.965382603Z" level=info msg="StopPodSandbox for \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" returns successfully" Sep 12 17:49:40.966904 containerd[1912]: time="2025-09-12T17:49:40.966876669Z" level=info msg="TearDown network for sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" successfully" Sep 12 17:49:40.966904 containerd[1912]: time="2025-09-12T17:49:40.966903190Z" level=info msg="StopPodSandbox for \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" returns successfully" Sep 12 17:49:41.111874 kubelet[3208]: I0912 17:49:41.111824 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-xtables-lock\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.111874 kubelet[3208]: I0912 17:49:41.111878 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-cgroup\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112658 kubelet[3208]: I0912 17:49:41.111908 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-run\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112658 kubelet[3208]: I0912 17:49:41.111938 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv6kk\" (UniqueName: \"kubernetes.io/projected/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-kube-api-access-pv6kk\") pod \"c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7\" (UID: \"c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7\") " Sep 12 17:49:41.112658 kubelet[3208]: I0912 17:49:41.111970 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-hubble-tls\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112658 kubelet[3208]: I0912 17:49:41.111993 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-bpf-maps\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112658 kubelet[3208]: I0912 17:49:41.112019 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-kernel\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112658 
kubelet[3208]: I0912 17:49:41.112046 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cni-path\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112942 kubelet[3208]: I0912 17:49:41.112073 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-cilium-config-path\") pod \"c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7\" (UID: \"c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7\") " Sep 12 17:49:41.112942 kubelet[3208]: I0912 17:49:41.112105 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-net\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112942 kubelet[3208]: I0912 17:49:41.112143 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzdlh\" (UniqueName: \"kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-kube-api-access-wzdlh\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112942 kubelet[3208]: I0912 17:49:41.112183 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-lib-modules\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112942 kubelet[3208]: I0912 17:49:41.112211 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-etc-cni-netd\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.112942 kubelet[3208]: I0912 17:49:41.112235 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-hostproc\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.113259 kubelet[3208]: I0912 17:49:41.112259 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-config-path\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.113259 kubelet[3208]: I0912 17:49:41.112285 3208 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7e886de-a052-4260-8ede-050d4d6994fa-clustermesh-secrets\") pod \"f7e886de-a052-4260-8ede-050d4d6994fa\" (UID: \"f7e886de-a052-4260-8ede-050d4d6994fa\") " Sep 12 17:49:41.114290 kubelet[3208]: I0912 17:49:41.114231 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cni-path" (OuterVolumeSpecName: "cni-path") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.114377 kubelet[3208]: I0912 17:49:41.114316 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.114377 kubelet[3208]: I0912 17:49:41.114344 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.114377 kubelet[3208]: I0912 17:49:41.114366 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.115158 kubelet[3208]: I0912 17:49:41.115124 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.115502 kubelet[3208]: I0912 17:49:41.115468 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.119722 kubelet[3208]: I0912 17:49:41.115859 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.119722 kubelet[3208]: I0912 17:49:41.115884 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.119722 kubelet[3208]: I0912 17:49:41.115911 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-hostproc" (OuterVolumeSpecName: "hostproc") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.119722 kubelet[3208]: I0912 17:49:41.119455 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:49:41.119722 kubelet[3208]: I0912 17:49:41.119607 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e886de-a052-4260-8ede-050d4d6994fa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:49:41.121467 kubelet[3208]: I0912 17:49:41.121113 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-kube-api-access-wzdlh" (OuterVolumeSpecName: "kube-api-access-wzdlh") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "kube-api-access-wzdlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:49:41.123180 kubelet[3208]: I0912 17:49:41.123075 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7" (UID: "c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:49:41.124957 kubelet[3208]: I0912 17:49:41.124924 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:49:41.126109 kubelet[3208]: I0912 17:49:41.126081 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-kube-api-access-pv6kk" (OuterVolumeSpecName: "kube-api-access-pv6kk") pod "c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7" (UID: "c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7"). InnerVolumeSpecName "kube-api-access-pv6kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:49:41.126318 kubelet[3208]: I0912 17:49:41.126285 3208 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f7e886de-a052-4260-8ede-050d4d6994fa" (UID: "f7e886de-a052-4260-8ede-050d4d6994fa"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:49:41.213551 kubelet[3208]: I0912 17:49:41.213487 3208 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-kernel\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213551 kubelet[3208]: I0912 17:49:41.213522 3208 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cni-path\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213551 kubelet[3208]: I0912 17:49:41.213532 3208 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-cilium-config-path\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213551 kubelet[3208]: I0912 17:49:41.213542 3208 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-etc-cni-netd\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213551 kubelet[3208]: I0912 17:49:41.213553 3208 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-host-proc-sys-net\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213551 kubelet[3208]: I0912 17:49:41.213562 3208 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzdlh\" (UniqueName: \"kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-kube-api-access-wzdlh\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213573 3208 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-lib-modules\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213583 3208 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-hostproc\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213591 3208 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-config-path\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213601 3208 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7e886de-a052-4260-8ede-050d4d6994fa-clustermesh-secrets\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213609 3208 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-xtables-lock\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213618 3208 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-cgroup\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213626 3208 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-cilium-run\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.213941 kubelet[3208]: I0912 17:49:41.213633 3208 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7e886de-a052-4260-8ede-050d4d6994fa-hubble-tls\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.214132 kubelet[3208]: I0912 17:49:41.213642 3208 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7e886de-a052-4260-8ede-050d4d6994fa-bpf-maps\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.214132 kubelet[3208]: I0912 17:49:41.213650 3208 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv6kk\" (UniqueName: \"kubernetes.io/projected/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7-kube-api-access-pv6kk\") on node \"ip-172-31-16-223\" DevicePath \"\"" Sep 12 17:49:41.333186 systemd[1]: Removed slice kubepods-besteffort-podc1cd7ce3_ea60_4cf7_ba05_6bd6bb8f52b7.slice - libcontainer container kubepods-besteffort-podc1cd7ce3_ea60_4cf7_ba05_6bd6bb8f52b7.slice. Sep 12 17:49:41.333318 systemd[1]: kubepods-besteffort-podc1cd7ce3_ea60_4cf7_ba05_6bd6bb8f52b7.slice: Consumed 459ms CPU time, 37.6M memory peak, 15.3M read from disk, 4K written to disk. Sep 12 17:49:41.334907 systemd[1]: Removed slice kubepods-burstable-podf7e886de_a052_4260_8ede_050d4d6994fa.slice - libcontainer container kubepods-burstable-podf7e886de_a052_4260_8ede_050d4d6994fa.slice. Sep 12 17:49:41.335014 systemd[1]: kubepods-burstable-podf7e886de_a052_4260_8ede_050d4d6994fa.slice: Consumed 7.924s CPU time, 196.8M memory peak, 78M read from disk, 13.3M written to disk. Sep 12 17:49:41.685779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8-shm.mount: Deactivated successfully. Sep 12 17:49:41.687234 systemd[1]: var-lib-kubelet-pods-c1cd7ce3\x2dea60\x2d4cf7\x2dba05\x2d6bd6bb8f52b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpv6kk.mount: Deactivated successfully. Sep 12 17:49:41.687490 systemd[1]: var-lib-kubelet-pods-f7e886de\x2da052\x2d4260\x2d8ede\x2d050d4d6994fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzdlh.mount: Deactivated successfully. Sep 12 17:49:41.687585 systemd[1]: var-lib-kubelet-pods-f7e886de\x2da052\x2d4260\x2d8ede\x2d050d4d6994fa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:49:41.687677 systemd[1]: var-lib-kubelet-pods-f7e886de\x2da052\x2d4260\x2d8ede\x2d050d4d6994fa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 12 17:49:41.692971 kubelet[3208]: I0912 17:49:41.692907 3208 scope.go:117] "RemoveContainer" containerID="2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7" Sep 12 17:49:41.700344 containerd[1912]: time="2025-09-12T17:49:41.700301296Z" level=info msg="RemoveContainer for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\"" Sep 12 17:49:41.731445 containerd[1912]: time="2025-09-12T17:49:41.729412376Z" level=info msg="RemoveContainer for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" returns successfully" Sep 12 17:49:41.731576 kubelet[3208]: I0912 17:49:41.729730 3208 scope.go:117] "RemoveContainer" containerID="4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4" Sep 12 17:49:41.732726 containerd[1912]: time="2025-09-12T17:49:41.732672281Z" level=info msg="RemoveContainer for \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\"" Sep 12 17:49:41.743253 containerd[1912]: time="2025-09-12T17:49:41.743202839Z" level=info msg="RemoveContainer for \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" returns successfully" Sep 12 17:49:41.743712 kubelet[3208]: I0912 17:49:41.743506 3208 scope.go:117] "RemoveContainer" containerID="e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2" Sep 12 17:49:41.746878 containerd[1912]: time="2025-09-12T17:49:41.746836878Z" level=info msg="RemoveContainer for \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\"" Sep 12 17:49:41.789459 containerd[1912]: time="2025-09-12T17:49:41.789280391Z" level=info msg="RemoveContainer for \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" returns successfully" Sep 12 17:49:41.789901 kubelet[3208]: I0912 17:49:41.789789 3208 scope.go:117] "RemoveContainer" containerID="edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b" Sep 12 17:49:41.792112 containerd[1912]: time="2025-09-12T17:49:41.792077204Z" level=info msg="RemoveContainer for \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\"" Sep 12 17:49:41.809078 containerd[1912]: time="2025-09-12T17:49:41.809031667Z" level=info msg="RemoveContainer for \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" returns successfully" Sep 12 17:49:41.809470 kubelet[3208]: I0912 17:49:41.809438 3208 scope.go:117] "RemoveContainer" containerID="6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025" Sep 12 17:49:41.811623 containerd[1912]: time="2025-09-12T17:49:41.811586101Z" level=info msg="RemoveContainer for \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\"" Sep 12 17:49:41.817431 containerd[1912]: time="2025-09-12T17:49:41.817358462Z" level=info msg="RemoveContainer for \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" returns successfully" Sep 12 17:49:41.817903 kubelet[3208]: I0912 17:49:41.817867 3208 scope.go:117] "RemoveContainer" containerID="2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7" Sep 12 17:49:41.818364 containerd[1912]: time="2025-09-12T17:49:41.818314235Z" level=error msg="ContainerStatus for \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\": not found" Sep 12 17:49:41.820593 kubelet[3208]: E0912 17:49:41.819710 3208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\": not found" containerID="2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7" Sep 12 17:49:41.822694 kubelet[3208]: I0912 17:49:41.822567 3208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7"} err="failed to get container status \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a758e212865efdc271e39752da15fb0ce2a29d36cecffe2d31db5ae0f4ab0a7\": not found" Sep 12 17:49:41.822694 kubelet[3208]: I0912 17:49:41.822699 3208 scope.go:117] "RemoveContainer" containerID="4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4" Sep 12 17:49:41.823045 containerd[1912]: time="2025-09-12T17:49:41.823004101Z" level=error msg="ContainerStatus for \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\": not found" Sep 12 17:49:41.833650 kubelet[3208]: E0912 17:49:41.832843 3208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\": not found" containerID="4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4" Sep 12 17:49:41.833650 kubelet[3208]: I0912 17:49:41.832888 3208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4"} err="failed to get container status \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4389b29257bc08a9c496bfaf776b1142708c0cdd92e80ef0cf561872b3564dc4\": not found" Sep 12 17:49:41.833650 kubelet[3208]: I0912 17:49:41.832913 3208 scope.go:117] "RemoveContainer" containerID="e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2" Sep 12 17:49:41.833911 containerd[1912]: time="2025-09-12T17:49:41.833404537Z" level=error msg="ContainerStatus for \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\": not found" Sep 12 17:49:41.833965 kubelet[3208]: E0912 17:49:41.833664 3208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\": not found" containerID="e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2" Sep 12 17:49:41.833965 kubelet[3208]: I0912 17:49:41.833688 3208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2"} err="failed to get container status \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8136547ad44bc8a6c4f0ea912d8b7bef65dc7a99fc349aab6c46dad9a3d16e2\": not found" Sep 12 17:49:41.833965 kubelet[3208]: I0912 17:49:41.833723 3208 
scope.go:117] "RemoveContainer" containerID="edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b" Sep 12 17:49:41.834057 containerd[1912]: time="2025-09-12T17:49:41.834036400Z" level=error msg="ContainerStatus for \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\": not found" Sep 12 17:49:41.834343 kubelet[3208]: E0912 17:49:41.834159 3208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\": not found" containerID="edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b" Sep 12 17:49:41.834343 kubelet[3208]: I0912 17:49:41.834325 3208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b"} err="failed to get container status \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\": rpc error: code = NotFound desc = an error occurred when try to find container \"edb0bd4e54bada3c3164e60e249a42256bbc3cc2dbda2d42bdc213a26aecc01b\": not found" Sep 12 17:49:41.834343 kubelet[3208]: I0912 17:49:41.834341 3208 scope.go:117] "RemoveContainer" containerID="6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025" Sep 12 17:49:41.834548 containerd[1912]: time="2025-09-12T17:49:41.834518077Z" level=error msg="ContainerStatus for \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\": not found" Sep 12 17:49:41.834662 kubelet[3208]: E0912 17:49:41.834639 3208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\": not found" containerID="6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025" Sep 12 17:49:41.834730 kubelet[3208]: I0912 17:49:41.834709 3208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025"} err="failed to get container status \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\": rpc error: code = NotFound desc = an error occurred when try to find container \"6389f6a7d3b82f8d37ac1677a2f5994aae988db6f46df0adadaa69ba9256d025\": not found" Sep 12 17:49:41.834730 kubelet[3208]: I0912 17:49:41.834728 3208 scope.go:117] "RemoveContainer" containerID="664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360" Sep 12 17:49:41.836571 containerd[1912]: time="2025-09-12T17:49:41.836543222Z" level=info msg="RemoveContainer for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\"" Sep 12 17:49:41.842118 containerd[1912]: time="2025-09-12T17:49:41.842082663Z" level=info msg="RemoveContainer for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" returns successfully" Sep 12 17:49:41.842376 kubelet[3208]: I0912 17:49:41.842358 3208 scope.go:117] "RemoveContainer" containerID="664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360" Sep 12 17:49:41.842697 containerd[1912]: 
time="2025-09-12T17:49:41.842653627Z" level=error msg="ContainerStatus for \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\": not found" Sep 12 17:49:41.842837 kubelet[3208]: E0912 17:49:41.842814 3208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\": not found" containerID="664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360" Sep 12 17:49:41.842907 kubelet[3208]: I0912 17:49:41.842843 3208 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360"} err="failed to get container status \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\": rpc error: code = NotFound desc = an error occurred when try to find container \"664ef6fd311b5c6a9679ab76f59f4ed7d397def0166957e823411739648b4360\": not found" Sep 12 17:49:42.531461 sshd[5015]: Connection closed by 139.178.68.195 port 55710 Sep 12 17:49:42.532050 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:42.536362 systemd[1]: sshd@22-172.31.16.223:22-139.178.68.195:55710.service: Deactivated successfully. Sep 12 17:49:42.539092 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:49:42.540593 systemd-logind[1858]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:49:42.542741 systemd-logind[1858]: Removed session 23. Sep 12 17:49:42.567911 systemd[1]: Started sshd@23-172.31.16.223:22-139.178.68.195:51826.service - OpenSSH per-connection server daemon (139.178.68.195:51826). Sep 12 17:49:42.735345 sshd[5164]: Accepted publickey for core from 139.178.68.195 port 51826 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:42.736925 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:42.742371 systemd-logind[1858]: New session 24 of user core. Sep 12 17:49:42.746405 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:49:43.102506 ntpd[1851]: Deleting interface #11 lxc_health, fe80::48bb:9bff:fe01:b8ae%8#123, interface stats: received=0, sent=0, dropped=0, active_time=57 secs Sep 12 17:49:43.103051 ntpd[1851]: 12 Sep 17:49:43 ntpd[1851]: Deleting interface #11 lxc_health, fe80::48bb:9bff:fe01:b8ae%8#123, interface stats: received=0, sent=0, dropped=0, active_time=57 secs Sep 12 17:49:43.328709 kubelet[3208]: I0912 17:49:43.328670 3208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7" path="/var/lib/kubelet/pods/c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7/volumes" Sep 12 17:49:43.329300 kubelet[3208]: I0912 17:49:43.329256 3208 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" path="/var/lib/kubelet/pods/f7e886de-a052-4260-8ede-050d4d6994fa/volumes" Sep 12 17:49:43.334860 sshd[5167]: Connection closed by 139.178.68.195 port 51826 Sep 12 17:49:43.336820 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:43.346705 systemd[1]: sshd@23-172.31.16.223:22-139.178.68.195:51826.service: Deactivated successfully. Sep 12 17:49:43.351485 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 12 17:49:43.353219 systemd-logind[1858]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:49:43.360646 kubelet[3208]: E0912 17:49:43.360585 3208 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" containerName="apply-sysctl-overwrites" Sep 12 17:49:43.360646 kubelet[3208]: E0912 17:49:43.360622 3208 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" containerName="clean-cilium-state" Sep 12 17:49:43.360941 kubelet[3208]: E0912 17:49:43.360844 3208 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7" containerName="cilium-operator" Sep 12 17:49:43.360941 kubelet[3208]: E0912 17:49:43.360868 3208 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" containerName="mount-cgroup" Sep 12 17:49:43.360941 kubelet[3208]: E0912 17:49:43.360878 3208 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" containerName="mount-bpf-fs" Sep 12 17:49:43.360941 kubelet[3208]: E0912 17:49:43.360885 3208 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" containerName="cilium-agent" Sep 12 17:49:43.368856 kubelet[3208]: I0912 17:49:43.368266 3208 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1cd7ce3-ea60-4cf7-ba05-6bd6bb8f52b7" containerName="cilium-operator" Sep 12 17:49:43.368856 kubelet[3208]: I0912 17:49:43.368300 3208 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e886de-a052-4260-8ede-050d4d6994fa" containerName="cilium-agent" Sep 12 17:49:43.373409 systemd-logind[1858]: Removed session 24. Sep 12 17:49:43.376930 systemd[1]: Started sshd@24-172.31.16.223:22-139.178.68.195:51832.service - OpenSSH per-connection server daemon (139.178.68.195:51832). Sep 12 17:49:43.390088 systemd[1]: Created slice kubepods-burstable-pod23088da0_7b27_4b1d_b45c_6bc2b1e9531e.slice - libcontainer container kubepods-burstable-pod23088da0_7b27_4b1d_b45c_6bc2b1e9531e.slice. 
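
Annotation: the "Created slice kubepods-burstable-pod23088da0_7b27_4b1d_b45c_6bc2b1e9531e.slice" line shows kubelet's systemd cgroup driver naming the new Cilium pod's slice from its QoS class and UID, with dashes replaced by underscores. A small illustration of that mapping (my own sketch of the observed pattern, not kubelet code):

```go
// Sketch: derive the systemd slice name kubelet's systemd cgroup driver uses
// for a burstable pod, matching the "Created slice ..." line above.
package main

import (
	"fmt"
	"strings"
)

func burstablePodSlice(podUID string) string {
	// Dashes in the UID are swapped for underscores in the unit name.
	return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(burstablePodSlice("23088da0-7b27-4b1d-b45c-6bc2b1e9531e"))
	// Output: kubepods-burstable-pod23088da0_7b27_4b1d_b45c_6bc2b1e9531e.slice
}
```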
Sep 12 17:49:43.539117 kubelet[3208]: I0912 17:49:43.539054 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-hostproc\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539117 kubelet[3208]: I0912 17:49:43.539099 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-cilium-config-path\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539117 kubelet[3208]: I0912 17:49:43.539117 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-cilium-run\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539538 kubelet[3208]: I0912 17:49:43.539134 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-etc-cni-netd\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539538 kubelet[3208]: I0912 17:49:43.539154 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-cilium-cgroup\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539538 kubelet[3208]: I0912 17:49:43.539188 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-lib-modules\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539538 kubelet[3208]: I0912 17:49:43.539203 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2j6\" (UniqueName: \"kubernetes.io/projected/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-kube-api-access-hb2j6\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539538 kubelet[3208]: I0912 17:49:43.539219 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-bpf-maps\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539538 kubelet[3208]: I0912 17:49:43.539233 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-xtables-lock\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539684 kubelet[3208]: I0912 17:49:43.539252 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-host-proc-sys-kernel\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539684 kubelet[3208]: I0912 17:49:43.539321 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-cni-path\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539684 kubelet[3208]: I0912 17:49:43.539366 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-clustermesh-secrets\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539684 kubelet[3208]: I0912 17:49:43.539391 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-hubble-tls\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539684 kubelet[3208]: I0912 17:49:43.539438 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-cilium-ipsec-secrets\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.539684 kubelet[3208]: I0912 17:49:43.539456 3208 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/23088da0-7b27-4b1d-b45c-6bc2b1e9531e-host-proc-sys-net\") pod \"cilium-77q6j\" (UID: \"23088da0-7b27-4b1d-b45c-6bc2b1e9531e\") " pod="kube-system/cilium-77q6j" Sep 12 17:49:43.578227 sshd[5178]: Accepted publickey for core from 139.178.68.195 port 51832 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:43.579645 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:43.585918 systemd-logind[1858]: New session 25 of user core. Sep 12 17:49:43.594428 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:49:43.701613 containerd[1912]: time="2025-09-12T17:49:43.701573234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-77q6j,Uid:23088da0-7b27-4b1d-b45c-6bc2b1e9531e,Namespace:kube-system,Attempt:0,}" Sep 12 17:49:43.708983 sshd[5181]: Connection closed by 139.178.68.195 port 51832 Sep 12 17:49:43.709800 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:43.714023 systemd[1]: sshd@24-172.31.16.223:22-139.178.68.195:51832.service: Deactivated successfully. Sep 12 17:49:43.716567 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:49:43.719065 systemd-logind[1858]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:49:43.721028 systemd-logind[1858]: Removed session 25. 
Sep 12 17:49:43.732696 containerd[1912]: time="2025-09-12T17:49:43.732269538Z" level=info msg="connecting to shim b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6" address="unix:///run/containerd/s/b7df0f060072785dc77d8933cfada292694ffb16263c8b12deea4e8480e94658" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:43.746816 systemd[1]: Started sshd@25-172.31.16.223:22-139.178.68.195:51840.service - OpenSSH per-connection server daemon (139.178.68.195:51840). Sep 12 17:49:43.778437 systemd[1]: Started cri-containerd-b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6.scope - libcontainer container b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6. Sep 12 17:49:43.817187 containerd[1912]: time="2025-09-12T17:49:43.817061198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-77q6j,Uid:23088da0-7b27-4b1d-b45c-6bc2b1e9531e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\"" Sep 12 17:49:43.820716 containerd[1912]: time="2025-09-12T17:49:43.820687245Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:49:43.832735 containerd[1912]: time="2025-09-12T17:49:43.832057349Z" level=info msg="Container c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:43.845213 containerd[1912]: time="2025-09-12T17:49:43.845177907Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\"" Sep 12 17:49:43.845992 containerd[1912]: time="2025-09-12T17:49:43.845971353Z" level=info msg="StartContainer for \"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\"" Sep 12 17:49:43.846848 containerd[1912]: time="2025-09-12T17:49:43.846825261Z" level=info msg="connecting to shim c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda" address="unix:///run/containerd/s/b7df0f060072785dc77d8933cfada292694ffb16263c8b12deea4e8480e94658" protocol=ttrpc version=3 Sep 12 17:49:43.872509 systemd[1]: Started cri-containerd-c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda.scope - libcontainer container c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda. Sep 12 17:49:43.914260 containerd[1912]: time="2025-09-12T17:49:43.914215472Z" level=info msg="StartContainer for \"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\" returns successfully" Sep 12 17:49:43.932853 systemd[1]: cri-containerd-c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda.scope: Deactivated successfully. Sep 12 17:49:43.933233 systemd[1]: cri-containerd-c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda.scope: Consumed 24ms CPU time, 9.9M memory peak, 3.4M read from disk. 
Sep 12 17:49:43.936780 containerd[1912]: time="2025-09-12T17:49:43.936734144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\" id:\"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\" pid:5252 exited_at:{seconds:1757699383 nanos:935113833}" Sep 12 17:49:43.937027 containerd[1912]: time="2025-09-12T17:49:43.936994302Z" level=info msg="received exit event container_id:\"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\" id:\"c582cd8a475bd178df34f635b3014d5d0d39ec55c8498542588d6a2226274cda\" pid:5252 exited_at:{seconds:1757699383 nanos:935113833}" Sep 12 17:49:43.937264 sshd[5211]: Accepted publickey for core from 139.178.68.195 port 51840 ssh2: RSA SHA256:W4YS5mF+BRASipQdoc83aYiclsJC9j/B8FPkaJHVZC4 Sep 12 17:49:43.940324 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:49:43.951977 systemd-logind[1858]: New session 26 of user core. Sep 12 17:49:43.956524 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:49:44.725373 containerd[1912]: time="2025-09-12T17:49:44.724716807Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:49:44.742937 containerd[1912]: time="2025-09-12T17:49:44.740926630Z" level=info msg="Container 75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:44.755037 containerd[1912]: time="2025-09-12T17:49:44.754994187Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\"" Sep 12 17:49:44.756202 containerd[1912]: time="2025-09-12T17:49:44.755878614Z" level=info msg="StartContainer for \"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\"" Sep 12 17:49:44.756999 containerd[1912]: time="2025-09-12T17:49:44.756971147Z" level=info msg="connecting to shim 75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0" address="unix:///run/containerd/s/b7df0f060072785dc77d8933cfada292694ffb16263c8b12deea4e8480e94658" protocol=ttrpc version=3 Sep 12 17:49:44.793407 systemd[1]: Started cri-containerd-75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0.scope - libcontainer container 75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0. Sep 12 17:49:44.831094 containerd[1912]: time="2025-09-12T17:49:44.831045187Z" level=info msg="StartContainer for \"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\" returns successfully" Sep 12 17:49:44.843576 systemd[1]: cri-containerd-75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0.scope: Deactivated successfully. Sep 12 17:49:44.843966 systemd[1]: cri-containerd-75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0.scope: Consumed 20ms CPU time, 7.6M memory peak, 2.2M read from disk. 
Sep 12 17:49:44.844846 containerd[1912]: time="2025-09-12T17:49:44.844672533Z" level=info msg="received exit event container_id:\"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\" id:\"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\" pid:5307 exited_at:{seconds:1757699384 nanos:844464799}" Sep 12 17:49:44.845119 containerd[1912]: time="2025-09-12T17:49:44.845096193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\" id:\"75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0\" pid:5307 exited_at:{seconds:1757699384 nanos:844464799}" Sep 12 17:49:44.867705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75feef5745f0111916ca320161978e35bc7d4bdcf01a4bc294ea4648e8930ec0-rootfs.mount: Deactivated successfully. Sep 12 17:49:45.497590 kubelet[3208]: E0912 17:49:45.497544 3208 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:49:45.717186 containerd[1912]: time="2025-09-12T17:49:45.716087468Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:49:45.737188 containerd[1912]: time="2025-09-12T17:49:45.735014240Z" level=info msg="Container 6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:45.758624 containerd[1912]: time="2025-09-12T17:49:45.758517312Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\"" Sep 12 17:49:45.759276 containerd[1912]: time="2025-09-12T17:49:45.759253259Z" level=info msg="StartContainer for \"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\"" Sep 12 17:49:45.760930 containerd[1912]: time="2025-09-12T17:49:45.760879051Z" level=info msg="connecting to shim 6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347" address="unix:///run/containerd/s/b7df0f060072785dc77d8933cfada292694ffb16263c8b12deea4e8480e94658" protocol=ttrpc version=3 Sep 12 17:49:45.786433 systemd[1]: Started cri-containerd-6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347.scope - libcontainer container 6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347. Sep 12 17:49:45.831639 containerd[1912]: time="2025-09-12T17:49:45.831594686Z" level=info msg="StartContainer for \"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\" returns successfully" Sep 12 17:49:45.839928 systemd[1]: cri-containerd-6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347.scope: Deactivated successfully. 
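
Annotation: each Cilium init container in this sandbox (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and the ones that follow) goes through the same cycle: CreateContainer, StartContainer, the cri-containerd scope is deactivated, and containerd emits a /tasks/exit event that surfaces above as "TaskExit event in podsandbox handler" and "received exit event". Every container also connects to the same shim socket (unix:///run/containerd/s/b7df0f06...), since one shim serves the whole sandbox. A sketch of watching those exit events directly with the containerd 1.x Go client (socket path and the CRI "k8s.io" namespace are assumptions):

```go
// Sketch: subscribe to the containerd exit events that appear above as
// "TaskExit event in podsandbox handler ... exited_at:{...}".
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Only exit events, i.e. the /tasks/exit topic logged above.
	ch, errs := client.EventService().Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			fmt.Printf("%s %s\n", env.Timestamp, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```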
Sep 12 17:49:45.842997 containerd[1912]: time="2025-09-12T17:49:45.842952384Z" level=info msg="received exit event container_id:\"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\" id:\"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\" pid:5350 exited_at:{seconds:1757699385 nanos:842721081}" Sep 12 17:49:45.843532 containerd[1912]: time="2025-09-12T17:49:45.843425573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\" id:\"6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347\" pid:5350 exited_at:{seconds:1757699385 nanos:842721081}" Sep 12 17:49:45.866472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6597c63d9e62401144845299a55f73e501098419c521f7025592fb77b9cc4347-rootfs.mount: Deactivated successfully. Sep 12 17:49:46.720265 containerd[1912]: time="2025-09-12T17:49:46.720232472Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:49:46.735402 containerd[1912]: time="2025-09-12T17:49:46.735330600Z" level=info msg="Container 0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:46.751927 containerd[1912]: time="2025-09-12T17:49:46.751887255Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\"" Sep 12 17:49:46.753102 containerd[1912]: time="2025-09-12T17:49:46.752999181Z" level=info msg="StartContainer for \"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\"" Sep 12 17:49:46.754233 containerd[1912]: time="2025-09-12T17:49:46.754203343Z" level=info msg="connecting to shim 0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213" address="unix:///run/containerd/s/b7df0f060072785dc77d8933cfada292694ffb16263c8b12deea4e8480e94658" protocol=ttrpc version=3 Sep 12 17:49:46.781385 systemd[1]: Started cri-containerd-0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213.scope - libcontainer container 0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213. Sep 12 17:49:46.812626 systemd[1]: cri-containerd-0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213.scope: Deactivated successfully. 
Sep 12 17:49:46.814064 containerd[1912]: time="2025-09-12T17:49:46.814021657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\" id:\"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\" pid:5390 exited_at:{seconds:1757699386 nanos:813663683}" Sep 12 17:49:46.816778 containerd[1912]: time="2025-09-12T17:49:46.816155554Z" level=info msg="received exit event container_id:\"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\" id:\"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\" pid:5390 exited_at:{seconds:1757699386 nanos:813663683}" Sep 12 17:49:46.825225 containerd[1912]: time="2025-09-12T17:49:46.825181379Z" level=info msg="StartContainer for \"0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213\" returns successfully" Sep 12 17:49:46.842756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f0bcf341a4df20cdb60326f53727cb9ef10dc2d6067fcc376115060bfdda213-rootfs.mount: Deactivated successfully. Sep 12 17:49:47.485809 kubelet[3208]: I0912 17:49:47.485754 3208 setters.go:600] "Node became not ready" node="ip-172-31-16-223" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:49:47Z","lastTransitionTime":"2025-09-12T17:49:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:49:47.729886 containerd[1912]: time="2025-09-12T17:49:47.729501827Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:49:47.745978 containerd[1912]: time="2025-09-12T17:49:47.745788177Z" level=info msg="Container b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:47.759628 containerd[1912]: time="2025-09-12T17:49:47.759574470Z" level=info msg="CreateContainer within sandbox \"b6cf996c95b62c899186f575f5ca6f8035c6dae45ebc605e6d2c787ee2cdb3c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\"" Sep 12 17:49:47.760700 containerd[1912]: time="2025-09-12T17:49:47.760655572Z" level=info msg="StartContainer for \"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\"" Sep 12 17:49:47.762646 containerd[1912]: time="2025-09-12T17:49:47.762609367Z" level=info msg="connecting to shim b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8" address="unix:///run/containerd/s/b7df0f060072785dc77d8933cfada292694ffb16263c8b12deea4e8480e94658" protocol=ttrpc version=3 Sep 12 17:49:47.796490 systemd[1]: Started cri-containerd-b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8.scope - libcontainer container b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8. 
Sep 12 17:49:47.837134 containerd[1912]: time="2025-09-12T17:49:47.837093368Z" level=info msg="StartContainer for \"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" returns successfully" Sep 12 17:49:47.952684 containerd[1912]: time="2025-09-12T17:49:47.952618815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" id:\"56117288ec2414eaa38dacf1a9fb5c4118c036a124aa5db113c61a036f7596f9\" pid:5459 exited_at:{seconds:1757699387 nanos:952061666}" Sep 12 17:49:48.596197 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 12 17:49:50.500447 containerd[1912]: time="2025-09-12T17:49:50.500407495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" id:\"6b3fb71c9a3af25f1e984cc316ef4d0239f92eac6bba8a5fbd74ebf353262a98\" pid:5621 exit_status:1 exited_at:{seconds:1757699390 nanos:499888201}" Sep 12 17:49:51.618780 (udev-worker)[5940]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:49:51.621776 (udev-worker)[5941]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:49:51.627972 systemd-networkd[1812]: lxc_health: Link UP Sep 12 17:49:51.648787 systemd-networkd[1812]: lxc_health: Gained carrier Sep 12 17:49:51.728601 kubelet[3208]: I0912 17:49:51.728533 3208 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-77q6j" podStartSLOduration=8.728513958 podStartE2EDuration="8.728513958s" podCreationTimestamp="2025-09-12 17:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:49:48.747496069 +0000 UTC m=+93.545766841" watchObservedRunningTime="2025-09-12 17:49:51.728513958 +0000 UTC m=+96.526784726" Sep 12 17:49:52.800995 containerd[1912]: time="2025-09-12T17:49:52.800943305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" id:\"5a7d8d3d39122b4a7ba8a9e2c61b0b6351bfcbaac7aea3d9184a323f98e26b52\" pid:5970 exited_at:{seconds:1757699392 nanos:800374666}" Sep 12 17:49:52.974587 systemd-networkd[1812]: lxc_health: Gained IPv6LL Sep 12 17:49:55.043022 containerd[1912]: time="2025-09-12T17:49:55.042976317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" id:\"b461c0d981d4ecc10d65d6f0a9d0705711ea00060706623f9a2daa96e27cc112\" pid:6002 exited_at:{seconds:1757699395 nanos:42462889}" Sep 12 17:49:55.102567 ntpd[1851]: Listen normally on 14 lxc_health [fe80::6879:84ff:fe2a:a3ed%14]:123 Sep 12 17:49:55.103061 ntpd[1851]: 12 Sep 17:49:55 ntpd[1851]: Listen normally on 14 lxc_health [fe80::6879:84ff:fe2a:a3ed%14]:123 Sep 12 17:49:57.170497 containerd[1912]: time="2025-09-12T17:49:57.170451085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" id:\"81051d23fef3c545aa25096935b2e993b4fb76ac6155db3ac0d4f882757b6bad\" pid:6047 exited_at:{seconds:1757699397 nanos:170180195}" Sep 12 17:49:59.373402 containerd[1912]: time="2025-09-12T17:49:59.373292669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b767588a27e3caa1b1c936ab43df9a5499748a38f0fbd009677bf5fe542905a8\" id:\"66f473387afffb02df0fdb515e0328ef04a1dadc1f7c1b8fe6d01f3ab0fb4ab0\" pid:6071 
exited_at:{seconds:1757699399 nanos:372742290}" Sep 12 17:49:59.416524 sshd[5287]: Connection closed by 139.178.68.195 port 51840 Sep 12 17:49:59.417837 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Sep 12 17:49:59.422021 systemd[1]: sshd@25-172.31.16.223:22-139.178.68.195:51840.service: Deactivated successfully. Sep 12 17:49:59.424920 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:49:59.426011 systemd-logind[1858]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:49:59.428046 systemd-logind[1858]: Removed session 26. Sep 12 17:50:14.841113 systemd[1]: cri-containerd-89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728.scope: Deactivated successfully. Sep 12 17:50:14.842724 systemd[1]: cri-containerd-89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728.scope: Consumed 3.449s CPU time, 69.2M memory peak, 23M read from disk. Sep 12 17:50:14.844211 containerd[1912]: time="2025-09-12T17:50:14.843876373Z" level=info msg="received exit event container_id:\"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\" id:\"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\" pid:3043 exit_status:1 exited_at:{seconds:1757699414 nanos:842152842}" Sep 12 17:50:14.846594 containerd[1912]: time="2025-09-12T17:50:14.846425682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\" id:\"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728\" pid:3043 exit_status:1 exited_at:{seconds:1757699414 nanos:842152842}" Sep 12 17:50:14.872749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728-rootfs.mount: Deactivated successfully. 
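
Annotation: at 17:50:14 the kube-controller-manager container (89246adef8...) exits with status 1 after consuming about 3.4 s of CPU; as the following lines show, kubelet then removes the dead container and creates Attempt:1 in the same sandbox. For a post-mortem look at such an exit one can ask containerd for the task status, e.g. (sketch using the containerd 1.x client; socket path and the k8s.io namespace are assumptions, and if kubelet has already cleaned up, the calls return NotFound instead):

```go
// Sketch: inspect the exit status of a crashed container such as the
// kube-controller-manager task above. If the task has already been cleaned
// up, Task() returns a NotFound error.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ctr, err := client.LoadContainer(ctx,
		"89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728")
	if err != nil {
		log.Fatal(err) // container record already removed
	}
	task, err := ctr.Task(ctx, nil)
	if err != nil {
		log.Fatal(err) // task (shim) already gone
	}
	st, err := task.Status(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status=%s exit=%d at %s\n", st.Status, st.ExitStatus, st.ExitTime)
}
```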
Sep 12 17:50:15.354035 containerd[1912]: time="2025-09-12T17:50:15.353970428Z" level=info msg="StopPodSandbox for \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\"" Sep 12 17:50:15.354263 containerd[1912]: time="2025-09-12T17:50:15.354135361Z" level=info msg="TearDown network for sandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" successfully" Sep 12 17:50:15.354263 containerd[1912]: time="2025-09-12T17:50:15.354156811Z" level=info msg="StopPodSandbox for \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" returns successfully" Sep 12 17:50:15.354552 containerd[1912]: time="2025-09-12T17:50:15.354524413Z" level=info msg="RemovePodSandbox for \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\"" Sep 12 17:50:15.361415 containerd[1912]: time="2025-09-12T17:50:15.361354215Z" level=info msg="Forcibly stopping sandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\"" Sep 12 17:50:15.361568 containerd[1912]: time="2025-09-12T17:50:15.361524072Z" level=info msg="TearDown network for sandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" successfully" Sep 12 17:50:15.366549 containerd[1912]: time="2025-09-12T17:50:15.366503916Z" level=info msg="Ensure that sandbox 6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8 in task-service has been cleanup successfully" Sep 12 17:50:15.373300 containerd[1912]: time="2025-09-12T17:50:15.373242104Z" level=info msg="RemovePodSandbox \"6184645073ea128a34cf646b1aeb9e38687f9b112c6996ad4f1950b4ab3e50b8\" returns successfully" Sep 12 17:50:15.373950 containerd[1912]: time="2025-09-12T17:50:15.373766829Z" level=info msg="StopPodSandbox for \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\"" Sep 12 17:50:15.373950 containerd[1912]: time="2025-09-12T17:50:15.373877971Z" level=info msg="TearDown network for sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" successfully" Sep 12 17:50:15.373950 containerd[1912]: time="2025-09-12T17:50:15.373888725Z" level=info msg="StopPodSandbox for \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" returns successfully" Sep 12 17:50:15.374356 containerd[1912]: time="2025-09-12T17:50:15.374329556Z" level=info msg="RemovePodSandbox for \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\"" Sep 12 17:50:15.374419 containerd[1912]: time="2025-09-12T17:50:15.374359944Z" level=info msg="Forcibly stopping sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\"" Sep 12 17:50:15.374461 containerd[1912]: time="2025-09-12T17:50:15.374449474Z" level=info msg="TearDown network for sandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" successfully" Sep 12 17:50:15.376228 containerd[1912]: time="2025-09-12T17:50:15.376096507Z" level=info msg="Ensure that sandbox 69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90 in task-service has been cleanup successfully" Sep 12 17:50:15.382995 containerd[1912]: time="2025-09-12T17:50:15.382931554Z" level=info msg="RemovePodSandbox \"69fcc6dbaf13904bc0fdca0a7372470fb618642328052e80131428315bdd1f90\" returns successfully" Sep 12 17:50:15.813784 kubelet[3208]: I0912 17:50:15.813749 3208 scope.go:117] "RemoveContainer" containerID="89246adef84e91c88f88e9c20b7feaad67c3a1ebd5e68330aff87b23d3e07728" Sep 12 17:50:15.817221 containerd[1912]: time="2025-09-12T17:50:15.817186619Z" level=info msg="CreateContainer within sandbox 
\"ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:50:15.837006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592861585.mount: Deactivated successfully. Sep 12 17:50:15.839278 containerd[1912]: time="2025-09-12T17:50:15.839235264Z" level=info msg="Container a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:15.851656 containerd[1912]: time="2025-09-12T17:50:15.851606723Z" level=info msg="CreateContainer within sandbox \"ad3243827cfc478b7b8d2f682af9612836a6004243d3e35b068b93f7c3f50d58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55\"" Sep 12 17:50:15.853198 containerd[1912]: time="2025-09-12T17:50:15.852456288Z" level=info msg="StartContainer for \"a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55\"" Sep 12 17:50:15.853536 containerd[1912]: time="2025-09-12T17:50:15.853479279Z" level=info msg="connecting to shim a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55" address="unix:///run/containerd/s/9cc7a4b6d00e5ed0c7066d0d3d50f9a01d77b7cbf9e2fc4c33a4240b7893d032" protocol=ttrpc version=3 Sep 12 17:50:15.884443 systemd[1]: Started cri-containerd-a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55.scope - libcontainer container a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55. Sep 12 17:50:15.939977 containerd[1912]: time="2025-09-12T17:50:15.939873712Z" level=info msg="StartContainer for \"a7ee55d6ac29cb49f549d61e6b1212178d5c783de3d3c9a26b18086521bb2c55\" returns successfully" Sep 12 17:50:17.767105 kubelet[3208]: E0912 17:50:17.766861 3208 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.223:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-223?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 12 17:50:19.747195 systemd[1]: cri-containerd-545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de.scope: Deactivated successfully. Sep 12 17:50:19.747469 systemd[1]: cri-containerd-545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de.scope: Consumed 1.848s CPU time, 28.1M memory peak, 11.1M read from disk. Sep 12 17:50:19.749676 containerd[1912]: time="2025-09-12T17:50:19.749631147Z" level=info msg="received exit event container_id:\"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\" id:\"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\" pid:3052 exit_status:1 exited_at:{seconds:1757699419 nanos:749303956}" Sep 12 17:50:19.750683 containerd[1912]: time="2025-09-12T17:50:19.750489894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\" id:\"545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de\" pid:3052 exit_status:1 exited_at:{seconds:1757699419 nanos:749303956}" Sep 12 17:50:19.775654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de-rootfs.mount: Deactivated successfully. 
Sep 12 17:50:19.832200 kubelet[3208]: I0912 17:50:19.831579 3208 scope.go:117] "RemoveContainer" containerID="545c6b16cec757051fb3aac64b496fff705343c09fcb52c5c51c63cb3b7b80de" Sep 12 17:50:19.836267 containerd[1912]: time="2025-09-12T17:50:19.836230869Z" level=info msg="CreateContainer within sandbox \"8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:50:19.854978 containerd[1912]: time="2025-09-12T17:50:19.854292183Z" level=info msg="Container 6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:50:19.878229 containerd[1912]: time="2025-09-12T17:50:19.878149188Z" level=info msg="CreateContainer within sandbox \"8921c1171d47572f5984bde41279f421d18f1b44595e29468271da3d4239c2e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f\"" Sep 12 17:50:19.878791 containerd[1912]: time="2025-09-12T17:50:19.878701310Z" level=info msg="StartContainer for \"6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f\"" Sep 12 17:50:19.879828 containerd[1912]: time="2025-09-12T17:50:19.879796879Z" level=info msg="connecting to shim 6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f" address="unix:///run/containerd/s/5c4beef987c603f8916542054c1a1a9afe85b33c7586b4724508c556aca15a70" protocol=ttrpc version=3 Sep 12 17:50:19.904429 systemd[1]: Started cri-containerd-6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f.scope - libcontainer container 6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f. Sep 12 17:50:19.960792 containerd[1912]: time="2025-09-12T17:50:19.960754169Z" level=info msg="StartContainer for \"6fbc8df9fb7d85c1d600789fada07171fd42b9b28c54549d452fa3e1d994ba7f\" returns successfully" Sep 12 17:50:27.768293 kubelet[3208]: E0912 17:50:27.768246 3208 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-223)"
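
Annotation: the "Failed to update lease" errors bracketing these restarts are kubelet's node heartbeat: by default it renews a coordination.k8s.io Lease named after the node in the kube-node-lease namespace every 10 s, and while the control-plane containers on this node are restarting the apiserver is too slow to answer, so the PUT times out. A sketch of reading that Lease with client-go, assuming a reachable apiserver and a kubeconfig at the default path (illustrative only):

```go
// Sketch: read the node heartbeat Lease that kubelet fails to update in the
// log lines above. Assumes a kubeconfig at $HOME/.kube/config and the node
// name seen in this log (ip-172-31-16-223).
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "ip-172-31-16-223", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// renewTime is what kubelet bumps on every successful heartbeat; if it
	// goes stale for long enough, the node is eventually marked NotReady.
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity)
	}
	fmt.Println("renewTime:", lease.Spec.RenewTime)
}
```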