Sep 12 17:15:57.938590 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:35:29 -00 2025 Sep 12 17:15:57.938627 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 17:15:57.938647 kernel: BIOS-provided physical RAM map: Sep 12 17:15:57.938659 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:15:57.938670 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 12 17:15:57.938682 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 12 17:15:57.938697 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 12 17:15:57.938710 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 12 17:15:57.939466 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 12 17:15:57.939482 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 12 17:15:57.939500 kernel: NX (Execute Disable) protection: active Sep 12 17:15:57.939512 kernel: APIC: Static calls initialized Sep 12 17:15:57.939525 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 12 17:15:57.939538 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 12 17:15:57.939553 kernel: extended physical RAM map: Sep 12 17:15:57.939566 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 17:15:57.939583 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000768c0017] usable Sep 12 17:15:57.939597 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Sep 12 17:15:57.939610 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Sep 12 17:15:57.939623 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 12 17:15:57.939637 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 12 17:15:57.939650 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 12 17:15:57.939664 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 12 17:15:57.939677 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 12 17:15:57.939691 kernel: efi: EFI v2.7 by EDK II Sep 12 17:15:57.939704 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Sep 12 17:15:57.939732 kernel: secureboot: Secure boot disabled Sep 12 17:15:57.939746 kernel: SMBIOS 2.7 present. 
Sep 12 17:15:57.939759 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 12 17:15:57.939772 kernel: Hypervisor detected: KVM Sep 12 17:15:57.939786 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 12 17:15:57.939799 kernel: kvm-clock: using sched offset of 3876071137 cycles Sep 12 17:15:57.939813 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 12 17:15:57.939827 kernel: tsc: Detected 2499.996 MHz processor Sep 12 17:15:57.939841 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 17:15:57.939854 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 17:15:57.939867 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 12 17:15:57.939885 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 17:15:57.939898 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 17:15:57.939912 kernel: Using GB pages for direct mapping Sep 12 17:15:57.939932 kernel: ACPI: Early table checksum verification disabled Sep 12 17:15:57.939946 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 12 17:15:57.939960 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 12 17:15:57.939978 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 12 17:15:57.939992 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 12 17:15:57.940006 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 12 17:15:57.940021 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 12 17:15:57.940035 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 12 17:15:57.940050 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 12 17:15:57.940065 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 
00000001 AMZN 00000001) Sep 12 17:15:57.940079 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 12 17:15:57.940096 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 12 17:15:57.940111 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 12 17:15:57.940126 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 12 17:15:57.940140 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 12 17:15:57.940154 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 12 17:15:57.940167 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 12 17:15:57.940181 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 12 17:15:57.940196 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 12 17:15:57.940210 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 12 17:15:57.940228 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 12 17:15:57.940242 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 12 17:15:57.940257 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 12 17:15:57.940271 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 12 17:15:57.940285 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 12 17:15:57.940300 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 12 17:15:57.940314 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 12 17:15:57.940328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 12 17:15:57.940342 kernel: NUMA: Initialized distance table, cnt=1 Sep 12 17:15:57.940359 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Sep 12 17:15:57.940373 kernel: Zone ranges: Sep 12 17:15:57.940389 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] Sep 12 17:15:57.940410 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 12 17:15:57.940427 kernel: Normal empty Sep 12 17:15:57.940441 kernel: Movable zone start for each node Sep 12 17:15:57.940455 kernel: Early memory node ranges Sep 12 17:15:57.940467 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 12 17:15:57.940481 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 12 17:15:57.940497 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 12 17:15:57.940511 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 12 17:15:57.940525 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 17:15:57.940539 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 17:15:57.940554 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 12 17:15:57.940569 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 12 17:15:57.940582 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 12 17:15:57.940595 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 12 17:15:57.940609 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 12 17:15:57.940626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 12 17:15:57.940640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 17:15:57.940654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 12 17:15:57.940666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 12 17:15:57.940681 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 17:15:57.940695 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 12 17:15:57.940709 kernel: TSC deadline timer available Sep 12 17:15:57.940756 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 12 17:15:57.940771 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 12 17:15:57.940785 kernel: [mem 
0x7ca00000-0xffffffff] available for PCI devices Sep 12 17:15:57.940804 kernel: Booting paravirtualized kernel on KVM Sep 12 17:15:57.940819 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 17:15:57.940833 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 17:15:57.940847 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576 Sep 12 17:15:57.940861 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152 Sep 12 17:15:57.940876 kernel: pcpu-alloc: [0] 0 1 Sep 12 17:15:57.940891 kernel: kvm-guest: PV spinlocks enabled Sep 12 17:15:57.940907 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 17:15:57.940929 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 17:15:57.940945 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 17:15:57.940961 kernel: random: crng init done Sep 12 17:15:57.940975 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 12 17:15:57.940989 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 17:15:57.941003 kernel: Fallback order for Node 0: 0 Sep 12 17:15:57.941018 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Sep 12 17:15:57.941032 kernel: Policy zone: DMA32 Sep 12 17:15:57.941051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 17:15:57.941068 kernel: Memory: 1872536K/2037804K available (14336K kernel code, 2293K rwdata, 22872K rodata, 43520K init, 1556K bss, 165012K reserved, 0K cma-reserved) Sep 12 17:15:57.941082 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 17:15:57.941098 kernel: Kernel/User page tables isolation: enabled Sep 12 17:15:57.941114 kernel: ftrace: allocating 37948 entries in 149 pages Sep 12 17:15:57.941141 kernel: ftrace: allocated 149 pages with 4 groups Sep 12 17:15:57.941160 kernel: Dynamic Preempt: voluntary Sep 12 17:15:57.941176 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 17:15:57.941193 kernel: rcu: RCU event tracing is enabled. Sep 12 17:15:57.941209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 17:15:57.941224 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 17:15:57.941241 kernel: Rude variant of Tasks RCU enabled. Sep 12 17:15:57.941260 kernel: Tracing variant of Tasks RCU enabled. Sep 12 17:15:57.941278 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 17:15:57.941295 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 17:15:57.941311 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 12 17:15:57.941329 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Sep 12 17:15:57.941348 kernel: Console: colour dummy device 80x25 Sep 12 17:15:57.941363 kernel: printk: console [tty0] enabled Sep 12 17:15:57.941379 kernel: printk: console [ttyS0] enabled Sep 12 17:15:57.941394 kernel: ACPI: Core revision 20230628 Sep 12 17:15:57.941413 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 12 17:15:57.941431 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 17:15:57.941447 kernel: x2apic enabled Sep 12 17:15:57.941462 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 17:15:57.941479 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 12 17:15:57.941500 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Sep 12 17:15:57.941516 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 12 17:15:57.941532 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 12 17:15:57.941549 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 17:15:57.941566 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 17:15:57.941584 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 17:15:57.941600 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 12 17:15:57.941616 kernel: RETBleed: Vulnerable Sep 12 17:15:57.941632 kernel: Speculative Store Bypass: Vulnerable Sep 12 17:15:57.941649 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:15:57.941671 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 12 17:15:57.941688 kernel: GDS: Unknown: Dependent on hypervisor status Sep 12 17:15:57.941705 kernel: active return thunk: its_return_thunk Sep 12 17:15:57.942767 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 17:15:57.942791 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 17:15:57.942808 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 17:15:57.942824 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 17:15:57.942840 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 12 17:15:57.942856 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 12 17:15:57.942872 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 12 17:15:57.942888 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 12 17:15:57.942910 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 12 17:15:57.942926 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 12 17:15:57.942942 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 17:15:57.942958 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 12 17:15:57.942974 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 12 17:15:57.942990 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 12 17:15:57.943006 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 12 17:15:57.943022 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 12 17:15:57.943038 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 12 17:15:57.943054 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 12 17:15:57.943071 kernel: Freeing SMP alternatives memory: 32K Sep 12 17:15:57.943087 kernel: pid_max: default: 32768 minimum: 301 Sep 12 17:15:57.943106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 12 17:15:57.943122 kernel: landlock: Up and running. Sep 12 17:15:57.943138 kernel: SELinux: Initializing. Sep 12 17:15:57.943154 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:15:57.943170 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 12 17:15:57.943187 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 12 17:15:57.943203 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:15:57.943220 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:15:57.943237 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 17:15:57.943254 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 12 17:15:57.943273 kernel: signal: max sigframe size: 3632 Sep 12 17:15:57.943290 kernel: rcu: Hierarchical SRCU implementation. Sep 12 17:15:57.943307 kernel: rcu: Max phase no-delay instances is 400. Sep 12 17:15:57.943323 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 17:15:57.943339 kernel: smp: Bringing up secondary CPUs ... Sep 12 17:15:57.943355 kernel: smpboot: x86: Booting SMP configuration: Sep 12 17:15:57.943371 kernel: .... node #0, CPUs: #1 Sep 12 17:15:57.943389 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 12 17:15:57.943406 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 12 17:15:57.943425 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 17:15:57.943442 kernel: smpboot: Max logical packages: 1 Sep 12 17:15:57.943458 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Sep 12 17:15:57.943474 kernel: devtmpfs: initialized Sep 12 17:15:57.943490 kernel: x86/mm: Memory block size: 128MB Sep 12 17:15:57.943507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 12 17:15:57.943523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 17:15:57.943539 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 17:15:57.943559 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 17:15:57.943575 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 17:15:57.943591 kernel: audit: initializing netlink subsys (disabled) Sep 12 17:15:57.943607 kernel: audit: type=2000 audit(1757697356.999:1): state=initialized audit_enabled=0 res=1 Sep 12 17:15:57.943624 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 17:15:57.943640 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 17:15:57.943655 kernel: cpuidle: using governor menu Sep 12 17:15:57.943672 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 17:15:57.943688 kernel: dca service started, version 1.12.1 Sep 12 17:15:57.943707 kernel: PCI: Using configuration type 1 for base access Sep 12 17:15:57.945513 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 12 17:15:57.945534 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 17:15:57.945549 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 17:15:57.945564 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 17:15:57.945579 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 17:15:57.945594 kernel: ACPI: Added _OSI(Module Device) Sep 12 17:15:57.945609 kernel: ACPI: Added _OSI(Processor Device) Sep 12 17:15:57.945624 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 17:15:57.945645 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 12 17:15:57.945659 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 12 17:15:57.945674 kernel: ACPI: Interpreter enabled Sep 12 17:15:57.945688 kernel: ACPI: PM: (supports S0 S5) Sep 12 17:15:57.945702 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 17:15:57.945755 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 17:15:57.945770 kernel: PCI: Using E820 reservations for host bridge windows Sep 12 17:15:57.945784 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 12 17:15:57.945799 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 12 17:15:57.946033 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 12 17:15:57.946173 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 12 17:15:57.946319 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 12 17:15:57.946337 kernel: acpiphp: Slot [3] registered Sep 12 17:15:57.946352 kernel: acpiphp: Slot [4] registered Sep 12 17:15:57.946367 kernel: acpiphp: Slot [5] registered Sep 12 17:15:57.946382 kernel: acpiphp: Slot [6] registered Sep 12 17:15:57.946396 kernel: acpiphp: Slot [7] registered Sep 12 17:15:57.946414 kernel: 
acpiphp: Slot [8] registered Sep 12 17:15:57.946428 kernel: acpiphp: Slot [9] registered Sep 12 17:15:57.946443 kernel: acpiphp: Slot [10] registered Sep 12 17:15:57.946457 kernel: acpiphp: Slot [11] registered Sep 12 17:15:57.946472 kernel: acpiphp: Slot [12] registered Sep 12 17:15:57.946486 kernel: acpiphp: Slot [13] registered Sep 12 17:15:57.946501 kernel: acpiphp: Slot [14] registered Sep 12 17:15:57.946516 kernel: acpiphp: Slot [15] registered Sep 12 17:15:57.946530 kernel: acpiphp: Slot [16] registered Sep 12 17:15:57.946547 kernel: acpiphp: Slot [17] registered Sep 12 17:15:57.946561 kernel: acpiphp: Slot [18] registered Sep 12 17:15:57.946575 kernel: acpiphp: Slot [19] registered Sep 12 17:15:57.946590 kernel: acpiphp: Slot [20] registered Sep 12 17:15:57.946605 kernel: acpiphp: Slot [21] registered Sep 12 17:15:57.946618 kernel: acpiphp: Slot [22] registered Sep 12 17:15:57.946631 kernel: acpiphp: Slot [23] registered Sep 12 17:15:57.946647 kernel: acpiphp: Slot [24] registered Sep 12 17:15:57.946663 kernel: acpiphp: Slot [25] registered Sep 12 17:15:57.946679 kernel: acpiphp: Slot [26] registered Sep 12 17:15:57.946698 kernel: acpiphp: Slot [27] registered Sep 12 17:15:57.946837 kernel: acpiphp: Slot [28] registered Sep 12 17:15:57.946857 kernel: acpiphp: Slot [29] registered Sep 12 17:15:57.946873 kernel: acpiphp: Slot [30] registered Sep 12 17:15:57.946889 kernel: acpiphp: Slot [31] registered Sep 12 17:15:57.946906 kernel: PCI host bridge to bus 0000:00 Sep 12 17:15:57.947063 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 12 17:15:57.947199 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 12 17:15:57.947334 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 12 17:15:57.947461 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 12 17:15:57.947605 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 12 
17:15:57.947769 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 12 17:15:57.947960 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 12 17:15:57.948119 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 12 17:15:57.948272 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 12 17:15:57.948413 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 12 17:15:57.948552 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 12 17:15:57.948689 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 12 17:15:57.950600 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 12 17:15:57.951989 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 12 17:15:57.952216 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 12 17:15:57.952391 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 12 17:15:57.952556 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 12 17:15:57.952701 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Sep 12 17:15:57.952872 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 12 17:15:57.953014 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Sep 12 17:15:57.953160 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 12 17:15:57.953331 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 12 17:15:57.953499 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Sep 12 17:15:57.953664 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 12 17:15:57.953904 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Sep 12 17:15:57.953929 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 12 17:15:57.953945 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 12 17:15:57.953960 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 12 17:15:57.953976 
kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 12 17:15:57.953992 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 12 17:15:57.954012 kernel: iommu: Default domain type: Translated Sep 12 17:15:57.954028 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 17:15:57.954044 kernel: efivars: Registered efivars operations Sep 12 17:15:57.954061 kernel: PCI: Using ACPI for IRQ routing Sep 12 17:15:57.954078 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 12 17:15:57.954095 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Sep 12 17:15:57.954111 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 12 17:15:57.954127 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 12 17:15:57.954302 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 12 17:15:57.954471 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 12 17:15:57.954623 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 12 17:15:57.954643 kernel: vgaarb: loaded Sep 12 17:15:57.954660 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 12 17:15:57.954677 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 12 17:15:57.954694 kernel: clocksource: Switched to clocksource kvm-clock Sep 12 17:15:57.956787 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 17:15:57.956819 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 17:15:57.956837 kernel: pnp: PnP ACPI init Sep 12 17:15:57.956858 kernel: pnp: PnP ACPI: found 5 devices Sep 12 17:15:57.956874 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 17:15:57.956890 kernel: NET: Registered PF_INET protocol family Sep 12 17:15:57.956905 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 12 17:15:57.956921 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 12 
17:15:57.956937 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 17:15:57.956953 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 17:15:57.956968 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 12 17:15:57.956987 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 12 17:15:57.957002 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:15:57.957018 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 12 17:15:57.957033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 17:15:57.957048 kernel: NET: Registered PF_XDP protocol family Sep 12 17:15:57.957222 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 12 17:15:57.957356 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 12 17:15:57.957485 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 12 17:15:57.957618 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 12 17:15:57.958195 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 12 17:15:57.958355 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 12 17:15:57.958376 kernel: PCI: CLS 0 bytes, default 64 Sep 12 17:15:57.958393 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 17:15:57.958409 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 12 17:15:57.958424 kernel: clocksource: Switched to clocksource tsc Sep 12 17:15:57.958440 kernel: Initialise system trusted keyrings Sep 12 17:15:57.958455 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 17:15:57.958475 kernel: Key type asymmetric registered Sep 12 17:15:57.958490 kernel: Asymmetric key parser 'x509' registered Sep 12 17:15:57.958505 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 251) Sep 12 17:15:57.958520 kernel: io scheduler mq-deadline registered Sep 12 17:15:57.958536 kernel: io scheduler kyber registered Sep 12 17:15:57.958550 kernel: io scheduler bfq registered Sep 12 17:15:57.958566 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 17:15:57.958582 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 17:15:57.958597 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 17:15:57.958613 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 12 17:15:57.958631 kernel: i8042: Warning: Keylock active Sep 12 17:15:57.958646 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 17:15:57.958661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 17:15:57.958844 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 12 17:15:57.958982 kernel: rtc_cmos 00:00: registered as rtc0 Sep 12 17:15:57.959123 kernel: rtc_cmos 00:00: setting system clock to 2025-09-12T17:15:57 UTC (1757697357) Sep 12 17:15:57.959254 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 12 17:15:57.959280 kernel: intel_pstate: CPU model not supported Sep 12 17:15:57.959297 kernel: efifb: probing for efifb Sep 12 17:15:57.959313 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 12 17:15:57.959356 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 12 17:15:57.959376 kernel: efifb: scrolling: redraw Sep 12 17:15:57.959393 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 17:15:57.959410 kernel: Console: switching to colour frame buffer device 100x37 Sep 12 17:15:57.959428 kernel: fb0: EFI VGA frame buffer device Sep 12 17:15:57.959445 kernel: pstore: Using crash dump compression: deflate Sep 12 17:15:57.959466 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 17:15:57.959483 kernel: NET: Registered PF_INET6 protocol family Sep 12 17:15:57.959500 kernel: Segment 
Routing with IPv6 Sep 12 17:15:57.959516 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 17:15:57.959533 kernel: NET: Registered PF_PACKET protocol family Sep 12 17:15:57.959551 kernel: Key type dns_resolver registered Sep 12 17:15:57.959568 kernel: IPI shorthand broadcast: enabled Sep 12 17:15:57.959585 kernel: sched_clock: Marking stable (489002469, 140441642)->(733621087, -104176976) Sep 12 17:15:57.959602 kernel: registered taskstats version 1 Sep 12 17:15:57.959619 kernel: Loading compiled-in X.509 certificates Sep 12 17:15:57.959640 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: d1d9e065fdbec39026aa56a07626d6d91ab4fce4' Sep 12 17:15:57.959656 kernel: Key type .fscrypt registered Sep 12 17:15:57.959673 kernel: Key type fscrypt-provisioning registered Sep 12 17:15:57.959690 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 12 17:15:57.959707 kernel: ima: Allocated hash algorithm: sha1 Sep 12 17:15:57.959759 kernel: ima: No architecture policies found Sep 12 17:15:57.959776 kernel: clk: Disabling unused clocks Sep 12 17:15:57.959793 kernel: Freeing unused kernel image (initmem) memory: 43520K Sep 12 17:15:57.959814 kernel: Write protecting the kernel read-only data: 38912k Sep 12 17:15:57.959832 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Sep 12 17:15:57.959849 kernel: Run /init as init process Sep 12 17:15:57.959866 kernel: with arguments: Sep 12 17:15:57.959898 kernel: /init Sep 12 17:15:57.959916 kernel: with environment: Sep 12 17:15:57.959933 kernel: HOME=/ Sep 12 17:15:57.959949 kernel: TERM=linux Sep 12 17:15:57.959966 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 17:15:57.959989 systemd[1]: Successfully made /usr/ read-only. 
Sep 12 17:15:57.960012 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:15:57.960030 systemd[1]: Detected virtualization amazon. Sep 12 17:15:57.960048 systemd[1]: Detected architecture x86-64. Sep 12 17:15:57.960071 systemd[1]: Running in initrd. Sep 12 17:15:57.960088 systemd[1]: No hostname configured, using default hostname. Sep 12 17:15:57.960106 systemd[1]: Hostname set to <localhost>. Sep 12 17:15:57.960124 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:15:57.960142 systemd[1]: Queued start job for default target initrd.target. Sep 12 17:15:57.960160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:15:57.960177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:15:57.960196 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 17:15:57.960217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:15:57.960235 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:15:57.960255 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:15:57.960275 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:15:57.960293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:15:57.960311 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 12 17:15:57.960329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:15:57.960350 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:15:57.960368 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:15:57.960386 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:15:57.960404 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:15:57.960422 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:15:57.960440 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:15:57.960458 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:15:57.960486 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:15:57.960507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:15:57.960525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:15:57.960543 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:15:57.960561 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:15:57.960579 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:15:57.960600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:15:57.960618 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:15:57.960636 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:15:57.960654 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:15:57.960676 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:15:57.960694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:15:57.960712 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Sep 12 17:15:57.960770 systemd-journald[179]: Collecting audit messages is disabled. Sep 12 17:15:57.960814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:15:57.960833 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:15:57.960853 systemd-journald[179]: Journal started Sep 12 17:15:57.960893 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2406c66d466379aaca85db1123acfa) is 4.7M, max 38.2M, 33.4M free. Sep 12 17:15:57.965539 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:15:57.954942 systemd-modules-load[180]: Inserted module 'overlay' Sep 12 17:15:57.970755 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:15:57.971387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:15:57.975321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:15:57.996659 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:15:57.994739 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:15:58.001825 kernel: Bridge firewalling registered Sep 12 17:15:57.999489 systemd-modules-load[180]: Inserted module 'br_netfilter' Sep 12 17:15:58.007439 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 17:15:58.002687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:15:58.016137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:15:58.021166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:15:58.027605 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 12 17:15:58.030050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:15:58.037957 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:15:58.040294 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:15:58.042350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:15:58.055489 dracut-cmdline[208]: dracut-dracut-053 Sep 12 17:15:58.059285 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ea81bd4228a6b9fed11f4ec3af9a6e9673be062592f47971c283403bcba44656 Sep 12 17:15:58.066292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:15:58.077985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:15:58.126919 systemd-resolved[231]: Positive Trust Anchors: Sep 12 17:15:58.126936 systemd-resolved[231]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:15:58.127001 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:15:58.136979 systemd-resolved[231]: Defaulting to hostname 'linux'. Sep 12 17:15:58.138434 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:15:58.141101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:15:58.158804 kernel: SCSI subsystem initialized Sep 12 17:15:58.168743 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:15:58.180743 kernel: iscsi: registered transport (tcp) Sep 12 17:15:58.203190 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:15:58.203279 kernel: QLogic iSCSI HBA Driver Sep 12 17:15:58.241833 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:15:58.248887 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:15:58.274481 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 12 17:15:58.274560 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:15:58.274583 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:15:58.317751 kernel: raid6: avx512x4 gen() 17812 MB/s Sep 12 17:15:58.335738 kernel: raid6: avx512x2 gen() 17838 MB/s Sep 12 17:15:58.353747 kernel: raid6: avx512x1 gen() 17777 MB/s Sep 12 17:15:58.371742 kernel: raid6: avx2x4 gen() 17689 MB/s Sep 12 17:15:58.389743 kernel: raid6: avx2x2 gen() 17633 MB/s Sep 12 17:15:58.407969 kernel: raid6: avx2x1 gen() 13523 MB/s Sep 12 17:15:58.408014 kernel: raid6: using algorithm avx512x2 gen() 17838 MB/s Sep 12 17:15:58.426939 kernel: raid6: .... xor() 24737 MB/s, rmw enabled Sep 12 17:15:58.426997 kernel: raid6: using avx512x2 recovery algorithm Sep 12 17:15:58.448758 kernel: xor: automatically using best checksumming function avx Sep 12 17:15:58.604747 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:15:58.615204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:15:58.626009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:15:58.641139 systemd-udevd[398]: Using default interface naming scheme 'v255'. Sep 12 17:15:58.647171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:15:58.655908 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:15:58.675759 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Sep 12 17:15:58.705531 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:15:58.713989 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:15:58.767397 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:15:58.776978 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 17:15:58.801139 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:15:58.805214 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:15:58.805861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:15:58.806820 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:15:58.813984 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:15:58.843107 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:15:58.879426 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 12 17:15:58.879710 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 12 17:15:58.883760 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:15:58.898752 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 12 17:15:58.911876 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:15:58.911946 kernel: AES CTR mode by8 optimization enabled Sep 12 17:15:58.922277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:15:58.923070 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:af:88:c0:73:8f Sep 12 17:15:58.924374 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:15:58.927033 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:15:58.928640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:15:58.930884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:15:58.932306 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:15:58.938216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:15:58.945383 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 12 17:15:58.945629 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 12 17:15:58.947928 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:15:58.954252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:15:58.955192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:15:58.959253 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 17:15:58.967359 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:15:58.967421 kernel: GPT:9289727 != 16777215 Sep 12 17:15:58.967329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:15:58.974103 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:15:58.974143 kernel: GPT:9289727 != 16777215 Sep 12 17:15:58.974163 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:15:58.974183 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:15:58.980013 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:15:58.997548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:15:59.002983 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:15:59.020913 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:15:59.073754 kernel: BTRFS: device fsid 8328a8c6-e42c-42bb-93d2-f755d7523d53 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (447) Sep 12 17:15:59.081749 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (449) Sep 12 17:15:59.130052 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Sep 12 17:15:59.149734 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 12 17:15:59.159547 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 12 17:15:59.160237 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 12 17:15:59.172411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:15:59.183079 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:15:59.191114 disk-uuid[632]: Primary Header is updated. Sep 12 17:15:59.191114 disk-uuid[632]: Secondary Entries is updated. Sep 12 17:15:59.191114 disk-uuid[632]: Secondary Header is updated. Sep 12 17:15:59.196749 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:15:59.203744 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:16:00.205677 disk-uuid[633]: The operation has completed successfully. Sep 12 17:16:00.206365 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 17:16:00.358205 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:16:00.358356 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:16:00.400993 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:16:00.404896 sh[891]: Success Sep 12 17:16:00.419742 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:16:00.507450 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:16:00.517903 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:16:00.520100 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 17:16:00.552742 kernel: BTRFS info (device dm-0): first mount of filesystem 8328a8c6-e42c-42bb-93d2-f755d7523d53 Sep 12 17:16:00.552816 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:16:00.554976 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:16:00.557686 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:16:00.557754 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:16:00.666932 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 17:16:00.686150 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:16:00.687654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:16:00.693959 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:16:00.696904 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:16:00.722617 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:16:00.722690 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:16:00.722712 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:16:00.730054 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:16:00.736783 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:16:00.738456 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:16:00.744941 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:16:00.813974 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 12 17:16:00.823922 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:16:00.852921 systemd-networkd[1080]: lo: Link UP Sep 12 17:16:00.852932 systemd-networkd[1080]: lo: Gained carrier Sep 12 17:16:00.854697 systemd-networkd[1080]: Enumeration completed Sep 12 17:16:00.855304 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:16:00.855310 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:16:00.856339 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:16:00.857321 systemd[1]: Reached target network.target - Network. Sep 12 17:16:00.860087 systemd-networkd[1080]: eth0: Link UP Sep 12 17:16:00.860094 systemd-networkd[1080]: eth0: Gained carrier Sep 12 17:16:00.860108 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:16:00.873158 systemd-networkd[1080]: eth0: DHCPv4 address 172.31.19.109/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:16:01.192355 ignition[980]: Ignition 2.20.0 Sep 12 17:16:01.192377 ignition[980]: Stage: fetch-offline Sep 12 17:16:01.192618 ignition[980]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:16:01.192632 ignition[980]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:16:01.194703 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:16:01.192994 ignition[980]: Ignition finished successfully Sep 12 17:16:01.202048 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:16:01.219297 ignition[1090]: Ignition 2.20.0 Sep 12 17:16:01.219323 ignition[1090]: Stage: fetch Sep 12 17:16:01.219761 ignition[1090]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:16:01.219776 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:16:01.219907 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:16:01.228490 ignition[1090]: PUT result: OK Sep 12 17:16:01.232347 ignition[1090]: parsed url from cmdline: "" Sep 12 17:16:01.232357 ignition[1090]: no config URL provided Sep 12 17:16:01.232366 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:16:01.232379 ignition[1090]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:16:01.232400 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:16:01.233491 ignition[1090]: PUT result: OK Sep 12 17:16:01.234332 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 12 17:16:01.236012 ignition[1090]: GET result: OK Sep 12 17:16:01.237235 ignition[1090]: parsing config with SHA512: 87b0fabe393e479262c1599372b81bb20814ad7a6d86fabfca6655a81c40a01e1349fcfefb004fff7c0d249433e17add10abc564de3069d72d107d5d77167175 Sep 12 17:16:01.244725 unknown[1090]: fetched base config from "system" Sep 12 17:16:01.245479 unknown[1090]: fetched base config from "system" Sep 12 17:16:01.246034 ignition[1090]: fetch: fetch complete Sep 12 17:16:01.245486 unknown[1090]: fetched user config from "aws" Sep 12 17:16:01.246039 ignition[1090]: fetch: fetch passed Sep 12 17:16:01.246095 ignition[1090]: Ignition finished successfully Sep 12 17:16:01.248623 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:16:01.255292 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 17:16:01.273904 ignition[1097]: Ignition 2.20.0 Sep 12 17:16:01.273918 ignition[1097]: Stage: kargs Sep 12 17:16:01.274352 ignition[1097]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:16:01.274366 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:16:01.274493 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:16:01.275630 ignition[1097]: PUT result: OK Sep 12 17:16:01.279281 ignition[1097]: kargs: kargs passed Sep 12 17:16:01.279367 ignition[1097]: Ignition finished successfully Sep 12 17:16:01.280802 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:16:01.291197 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:16:01.354669 ignition[1103]: Ignition 2.20.0 Sep 12 17:16:01.354685 ignition[1103]: Stage: disks Sep 12 17:16:01.355323 ignition[1103]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:16:01.355338 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:16:01.355478 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:16:01.356351 ignition[1103]: PUT result: OK Sep 12 17:16:01.359278 ignition[1103]: disks: disks passed Sep 12 17:16:01.359360 ignition[1103]: Ignition finished successfully Sep 12 17:16:01.361209 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:16:01.361878 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:16:01.362164 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:16:01.362419 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:16:01.362656 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:16:01.363105 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:16:01.378935 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 12 17:16:01.428589 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:16:01.434586 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:16:01.446600 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:16:01.685173 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5378802a-8117-4ea8-949a-cd38005ba44a r/w with ordered data mode. Quota mode: none. Sep 12 17:16:01.685985 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:16:01.688778 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:16:01.706891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:16:01.710888 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:16:01.717477 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:16:01.717564 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:16:01.717605 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:16:01.741158 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:16:01.770574 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1131) Sep 12 17:16:01.770616 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:16:01.770638 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:16:01.770658 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:16:01.776771 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:16:01.779048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:16:01.782892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:16:02.137237 initrd-setup-root[1155]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:16:02.160266 initrd-setup-root[1162]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:16:02.171413 initrd-setup-root[1169]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:16:02.193071 initrd-setup-root[1176]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:16:02.221465 systemd-networkd[1080]: eth0: Gained IPv6LL Sep 12 17:16:02.542394 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:16:02.547965 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:16:02.551929 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:16:02.561419 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:16:02.563772 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:16:02.598500 ignition[1243]: INFO : Ignition 2.20.0 Sep 12 17:16:02.600348 ignition[1243]: INFO : Stage: mount Sep 12 17:16:02.600348 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:16:02.600348 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:16:02.600348 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:16:02.602622 ignition[1243]: INFO : PUT result: OK Sep 12 17:16:02.607068 ignition[1243]: INFO : mount: mount passed Sep 12 17:16:02.607068 ignition[1243]: INFO : Ignition finished successfully Sep 12 17:16:02.609434 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:16:02.611212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:16:02.617940 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:16:02.691996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 17:16:02.716242 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1256) Sep 12 17:16:02.722037 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 27144f91-5d6e-4232-8594-aeebe7d5186d Sep 12 17:16:02.722116 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:16:02.722139 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 12 17:16:02.729746 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:16:02.732291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:16:02.765694 ignition[1273]: INFO : Ignition 2.20.0 Sep 12 17:16:02.765694 ignition[1273]: INFO : Stage: files Sep 12 17:16:02.767345 ignition[1273]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:16:02.767345 ignition[1273]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:16:02.767345 ignition[1273]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:16:02.768811 ignition[1273]: INFO : PUT result: OK Sep 12 17:16:02.770475 ignition[1273]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:16:02.772767 ignition[1273]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:16:02.772767 ignition[1273]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:16:02.797150 ignition[1273]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:16:02.799192 ignition[1273]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:16:02.799192 ignition[1273]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:16:02.797969 unknown[1273]: wrote ssh authorized keys file for user: core Sep 12 17:16:02.802497 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:16:02.802497 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 17:16:02.910183 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:16:03.277175 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:16:03.277175 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:16:03.277175 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 12 17:16:03.465765 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:16:03.589589 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:16:03.589589 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:16:03.592496 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 17:16:04.043848 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:16:04.859315 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:16:04.859315 ignition[1273]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:16:04.884849 ignition[1273]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:16:04.885861 ignition[1273]: INFO : files: files passed
Sep 12 17:16:04.885861 ignition[1273]: INFO : Ignition finished successfully
Sep 12 17:16:04.887243 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:16:04.891961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:16:04.894895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:16:04.898596 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:16:04.898697 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
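The Ignition stage above reports each file, link, and unit operation as a matched `[started]`/`[finished]` pair keyed by a hex op ID (`op(3)` … `op(f)`). As an illustrative aside (not part of Flatcar or Ignition tooling), a few lines of Python can pair those markers up when auditing a captured log; the regex is an assumption about the line format seen here:

```python
import re

# Hypothetical helper: match the "op(N): [started|finished] <action>" shape
# of the Ignition journal lines quoted above.
OP_RE = re.compile(
    r'ignition\[\d+\]: INFO : .*?op\(([0-9a-f]+)\): \[(started|finished)\] (.+)'
)

def pair_ops(lines):
    """Return {op_id: action} for ops that logged both phases."""
    started, finished = {}, {}
    for line in lines:
        m = OP_RE.search(line)
        if not m:
            continue
        op_id, phase, action = m.groups()
        (started if phase == "started" else finished)[op_id] = action
    # An op is complete only if both markers were seen.
    return {op: act for op, act in started.items() if op in finished}

# Sample lines copied from the log above (op 4 completes, op e does not
# within this sample).
sample = [
    'Sep 12 17:16:03.277175 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"',
    'Sep 12 17:16:03.589589 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"',
    'Sep 12 17:16:04.885861 ignition[1273]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"',
]
print(pair_ops(sample))
```

In the full log every op from `op(3)` through `op(f)` has both markers, which is consistent with the closing "files passed" and "Ignition finished successfully" lines.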
Sep 12 17:16:04.906862 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:16:04.908663 initrd-setup-root-after-ignition[1302]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:16:04.911184 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:16:04.912155 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:16:04.913078 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:16:04.919949 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:16:04.945234 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:16:04.945388 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:16:04.947065 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:16:04.947992 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:16:04.948935 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:16:04.950964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:16:04.979047 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:16:04.989060 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:16:05.002371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:16:05.003251 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:16:05.004271 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:16:05.005181 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:16:05.005372 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:16:05.006579 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:16:05.007616 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:16:05.008441 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:16:05.009232 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:16:05.010000 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:16:05.010935 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:16:05.011650 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:16:05.012444 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:16:05.013604 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:16:05.014364 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:16:05.015224 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:16:05.015412 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:16:05.016531 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:16:05.017343 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:16:05.018047 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:16:05.018184 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:16:05.018977 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:16:05.019201 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:16:05.020566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:16:05.020784 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:16:05.021486 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:16:05.021649 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:16:05.034047 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:16:05.040089 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:16:05.040891 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:16:05.041127 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:16:05.046115 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:16:05.046338 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:16:05.056186 ignition[1326]: INFO : Ignition 2.20.0
Sep 12 17:16:05.056186 ignition[1326]: INFO : Stage: umount
Sep 12 17:16:05.059479 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:16:05.059479 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:16:05.059479 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:16:05.057176 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:16:05.066926 ignition[1326]: INFO : PUT result: OK
Sep 12 17:16:05.057320 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:16:05.068212 ignition[1326]: INFO : umount: umount passed
Sep 12 17:16:05.068212 ignition[1326]: INFO : Ignition finished successfully
Sep 12 17:16:05.071255 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:16:05.071399 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:16:05.072484 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:16:05.072549 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:16:05.073149 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:16:05.073207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:16:05.074796 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:16:05.074858 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:16:05.075350 systemd[1]: Stopped target network.target - Network.
Sep 12 17:16:05.075810 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:16:05.075867 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:16:05.076340 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:16:05.080357 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:16:05.084814 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:16:05.086012 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:16:05.086376 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:16:05.087184 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:16:05.087247 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:16:05.087901 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:16:05.087955 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:16:05.088498 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:16:05.088573 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:16:05.089146 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:16:05.089205 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:16:05.089895 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:16:05.090490 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:16:05.092323 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:16:05.094147 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:16:05.094267 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:16:05.095382 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:16:05.095509 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:16:05.097615 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:16:05.097739 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:16:05.102685 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:16:05.103033 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:16:05.103183 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:16:05.105503 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:16:05.107135 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:16:05.107204 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:16:05.112890 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:16:05.113441 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:16:05.113527 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:16:05.114227 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:16:05.114290 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:16:05.117053 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:16:05.117125 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:16:05.117649 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:16:05.117709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:16:05.118438 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:16:05.121879 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:16:05.121979 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:16:05.137141 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:16:05.137450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:16:05.140080 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:16:05.140218 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:16:05.142260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:16:05.142336 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:16:05.143444 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:16:05.143496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:16:05.144173 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:16:05.144249 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:16:05.145440 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:16:05.145508 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:16:05.146599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:16:05.146668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:16:05.155145 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:16:05.155850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:16:05.155944 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:16:05.157526 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:16:05.157600 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:16:05.158282 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:16:05.158341 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:16:05.161874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:16:05.161960 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:16:05.164615 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:16:05.164680 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:16:05.165048 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:16:05.165140 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:16:05.167265 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:16:05.181030 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:16:05.189792 systemd[1]: Switching root.
Sep 12 17:16:05.253058 systemd-journald[179]: Journal stopped
Sep 12 17:16:06.773997 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:16:06.774060 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:16:06.774098 kernel: SELinux: policy capability open_perms=1
Sep 12 17:16:06.774115 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:16:06.774127 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:16:06.774143 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:16:06.774155 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:16:06.774167 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:16:06.774178 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:16:06.774190 kernel: audit: type=1403 audit(1757697365.539:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:16:06.774211 systemd[1]: Successfully loaded SELinux policy in 53.753ms.
Sep 12 17:16:06.774237 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.754ms.
Sep 12 17:16:06.774251 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:16:06.774263 systemd[1]: Detected virtualization amazon.
Sep 12 17:16:06.774276 systemd[1]: Detected architecture x86-64.
Sep 12 17:16:06.774288 systemd[1]: Detected first boot.
Sep 12 17:16:06.774301 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:16:06.774314 zram_generator::config[1370]: No configuration found.
Sep 12 17:16:06.774331 kernel: Guest personality initialized and is inactive
Sep 12 17:16:06.774343 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 17:16:06.774354 kernel: Initialized host personality
Sep 12 17:16:06.774371 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:16:06.774383 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:16:06.774396 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:16:06.774409 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:16:06.774422 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:16:06.774434 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:16:06.774449 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:16:06.774462 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:16:06.774475 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:16:06.774491 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:16:06.774504 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:16:06.774516 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:16:06.774529 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:16:06.774541 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:16:06.774556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:16:06.774569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:16:06.774582 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:16:06.774594 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:16:06.774607 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:16:06.774619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:16:06.774633 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:16:06.774646 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:16:06.774660 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:16:06.774673 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:16:06.774685 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:16:06.774698 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:16:06.774710 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:16:06.774751 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:16:06.774764 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:16:06.774777 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:16:06.774790 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:16:06.774806 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:16:06.774819 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:16:06.774831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:16:06.774844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:16:06.774857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:16:06.774870 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:16:06.774884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:16:06.774896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:16:06.774908 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:16:06.774923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:16:06.774937 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:16:06.774949 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:16:06.774961 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:16:06.774974 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:16:06.774987 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:16:06.774999 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:16:06.775012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:16:06.775028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:16:06.775040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:16:06.775052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:16:06.775064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:16:06.775077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:16:06.775089 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:16:06.775102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:16:06.775114 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:16:06.775126 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:16:06.775142 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:16:06.775154 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:16:06.775167 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:16:06.775180 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:16:06.775193 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:16:06.775205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:16:06.775218 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:16:06.775230 kernel: loop: module loaded
Sep 12 17:16:06.775244 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:16:06.775256 kernel: fuse: init (API version 7.39)
Sep 12 17:16:06.775269 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:16:06.775283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:16:06.775296 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:16:06.775308 systemd[1]: Stopped verity-setup.service.
Sep 12 17:16:06.775321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:16:06.775337 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:16:06.775352 kernel: ACPI: bus type drm_connector registered
Sep 12 17:16:06.775364 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:16:06.775376 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:16:06.775391 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:16:06.775404 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:16:06.775416 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:16:06.775428 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:16:06.775464 systemd-journald[1453]: Collecting audit messages is disabled.
Sep 12 17:16:06.775490 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:16:06.775503 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:16:06.775519 systemd-journald[1453]: Journal started
Sep 12 17:16:06.775544 systemd-journald[1453]: Runtime Journal (/run/log/journal/ec2406c66d466379aaca85db1123acfa) is 4.7M, max 38.2M, 33.4M free.
Sep 12 17:16:06.482407 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:16:06.491103 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 17:16:06.491610 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:16:06.779736 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:16:06.781280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:16:06.781599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:16:06.782492 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:16:06.782824 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:16:06.783535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:16:06.783781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:16:06.784528 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:16:06.785123 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:16:06.785261 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:16:06.785872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:16:06.786020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:16:06.786697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:16:06.787696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:16:06.788369 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:16:06.799845 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:16:06.807075 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:16:06.819414 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:16:06.821548 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:16:06.821611 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:16:06.824410 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:16:06.830934 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:16:06.839042 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:16:06.842007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:16:06.845084 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:16:06.851444 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:16:06.852462 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:16:06.856478 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:16:06.859048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:16:06.865460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:16:06.870087 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:16:06.873968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:16:06.881793 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:16:06.885180 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:16:06.886848 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:16:06.888504 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:16:06.923865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:16:06.925991 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:16:06.931023 systemd-journald[1453]: Time spent on flushing to /var/log/journal/ec2406c66d466379aaca85db1123acfa is 95.775ms for 1018 entries.
Sep 12 17:16:06.931023 systemd-journald[1453]: System Journal (/var/log/journal/ec2406c66d466379aaca85db1123acfa) is 8M, max 195.6M, 187.6M free.
Sep 12 17:16:07.040589 systemd-journald[1453]: Received client request to flush runtime journal.
Sep 12 17:16:07.040663 kernel: loop0: detected capacity change from 0 to 138176
Sep 12 17:16:06.932779 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:16:06.960460 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:16:06.961590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:16:06.969545 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:16:06.984481 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
Sep 12 17:16:06.984506 systemd-tmpfiles[1505]: ACLs are not supported, ignoring.
Sep 12 17:16:06.995276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:16:07.009077 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:16:07.011510 udevadm[1517]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:16:07.044331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:16:07.062999 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:16:07.070767 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:16:07.089305 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:16:07.103926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:16:07.115346 kernel: loop1: detected capacity change from 0 to 62832
Sep 12 17:16:07.138282 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Sep 12 17:16:07.138313 systemd-tmpfiles[1529]: ACLs are not supported, ignoring.
Sep 12 17:16:07.147574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:16:07.242864 kernel: loop2: detected capacity change from 0 to 147912
Sep 12 17:16:07.381760 kernel: loop3: detected capacity change from 0 to 224512
Sep 12 17:16:07.493436 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:16:07.519106 kernel: loop4: detected capacity change from 0 to 138176
Sep 12 17:16:07.550230 kernel: loop5: detected capacity change from 0 to 62832
Sep 12 17:16:07.567817 kernel: loop6: detected capacity change from 0 to 147912
Sep 12 17:16:07.591762 kernel: loop7: detected capacity change from 0 to 224512
Sep 12 17:16:07.624734 (sd-merge)[1536]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 17:16:07.625511 (sd-merge)[1536]: Merged extensions into '/usr'.
Sep 12 17:16:07.632480 systemd[1]: Reload requested from client PID 1504 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:16:07.632649 systemd[1]: Reloading...
Sep 12 17:16:07.778750 zram_generator::config[1564]: No configuration found.
Sep 12 17:16:08.020763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:16:08.127136 systemd[1]: Reloading finished in 493 ms.
Sep 12 17:16:08.152735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:16:08.153590 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:16:08.165122 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:16:08.167928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:16:08.181028 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:16:08.210270 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:16:08.211136 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:16:08.212550 systemd-tmpfiles[1617]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:16:08.213104 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Sep 12 17:16:08.213288 systemd-tmpfiles[1617]: ACLs are not supported, ignoring.
Sep 12 17:16:08.214017 systemd[1]: Reload requested from client PID 1616 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:16:08.214034 systemd[1]: Reloading...
Sep 12 17:16:08.230878 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:16:08.230900 systemd-tmpfiles[1617]: Skipping /boot
Sep 12 17:16:08.242241 systemd-udevd[1618]: Using default interface naming scheme 'v255'.
Sep 12 17:16:08.261433 systemd-tmpfiles[1617]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:16:08.261592 systemd-tmpfiles[1617]: Skipping /boot
Sep 12 17:16:08.335747 zram_generator::config[1648]: No configuration found.
Sep 12 17:16:08.478485 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:16:08.609797 ldconfig[1499]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:16:08.616774 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 17:16:08.621747 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:16:08.630068 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 12 17:16:08.630439 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Sep 12 17:16:08.646794 kernel: ACPI: button: Sleep Button [SLPF]
Sep 12 17:16:08.697772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:16:08.723822 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1656)
Sep 12 17:16:08.727742 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Sep 12 17:16:08.871508 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:16:08.872913 systemd[1]: Reloading finished in 658 ms.
Sep 12 17:16:08.883693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:16:08.885543 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:16:08.906761 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:16:08.945769 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:16:08.963620 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:16:08.983262 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:16:08.990006 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:16:09.011391 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:16:09.012651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:16:09.021963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:16:09.027086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:16:09.031424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:16:09.035963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:16:09.037001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:16:09.037074 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:16:09.042925 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:16:09.053828 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:16:09.063987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:16:09.065866 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:16:09.070534 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:16:09.074688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:16:09.076057 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:16:09.078870 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:16:09.086115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:16:09.086938 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:16:09.088093 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:16:09.088333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:16:09.091471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:16:09.092784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:16:09.094754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:16:09.095212 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:16:09.147438 augenrules[1848]: No rules
Sep 12 17:16:09.153312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:16:09.156850 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:16:09.157119 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:16:09.158400 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:16:09.164380 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:16:09.172167 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:16:09.174923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:16:09.175552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:16:09.175650 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:16:09.189560 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:16:09.198605 lvm[1856]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:16:09.200020 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:16:09.228481 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:16:09.233896 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:16:09.234843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:16:09.244014 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:16:09.250055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:16:09.258318 lvm[1864]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:16:09.271242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:16:09.296519 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:16:09.300118 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:16:09.323748 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:16:09.327913 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:16:09.380034 systemd-networkd[1828]: lo: Link UP
Sep 12 17:16:09.380048 systemd-networkd[1828]: lo: Gained carrier
Sep 12 17:16:09.381839 systemd-networkd[1828]: Enumeration completed
Sep 12 17:16:09.382413 systemd-networkd[1828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:16:09.382512 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:16:09.383378 systemd-networkd[1828]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:16:09.389950 systemd-networkd[1828]: eth0: Link UP
Sep 12 17:16:09.390837 systemd-networkd[1828]: eth0: Gained carrier
Sep 12 17:16:09.390877 systemd-networkd[1828]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:16:09.392038 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:16:09.402073 systemd-resolved[1830]: Positive Trust Anchors:
Sep 12 17:16:09.402096 systemd-resolved[1830]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:16:09.402159 systemd-resolved[1830]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:16:09.403023 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:16:09.403922 systemd-networkd[1828]: eth0: DHCPv4 address 172.31.19.109/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:16:09.418818 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 17:16:09.420754 systemd-resolved[1830]: Defaulting to hostname 'linux'.
Sep 12 17:16:09.423420 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:16:09.424021 systemd[1]: Reached target network.target - Network.
Sep 12 17:16:09.424425 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:16:09.424835 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:16:09.425282 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:16:09.425651 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:16:09.426184 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:16:09.426614 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:16:09.427126 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:16:09.427451 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:16:09.427494 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:16:09.427831 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:16:09.429763 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 17:16:09.431757 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:16:09.434787 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 17:16:09.435394 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 17:16:09.435800 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 17:16:09.438400 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:16:09.439572 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 17:16:09.440701 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:16:09.441177 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:16:09.441534 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:16:09.441921 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:16:09.441956 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:16:09.447346 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:16:09.449708 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 17:16:09.452423 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:16:09.459897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:16:09.461910 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:16:09.463683 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:16:09.471258 jq[1886]: false
Sep 12 17:16:09.470909 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:16:09.475326 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 17:16:09.480654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:16:09.491874 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 17:16:09.497550 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:16:09.501875 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:16:09.523289 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:16:09.527191 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:16:09.527987 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:16:09.529947 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:16:09.539939 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:16:09.568401 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:16:09.569660 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:16:09.584771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:16:09.585065 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:16:09.601300 jq[1900]: true
Sep 12 17:16:09.627647 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:16:09.628820 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:16:09.660182 tar[1907]: linux-amd64/LICENSE
Sep 12 17:16:09.660182 tar[1907]: linux-amd64/helm
Sep 12 17:16:09.642509 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:00:58 UTC 2025 (1): Starting
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: ----------------------------------------------------
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: corporation. Support and training for ntp-4 are
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: available at https://www.nwtime.org/support
Sep 12 17:16:09.660713 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: ----------------------------------------------------
Sep 12 17:16:09.642274 dbus-daemon[1885]: [system] SELinux support is enabled
Sep 12 17:16:09.643037 (ntainerd)[1909]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:16:09.653400 ntpd[1889]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:00:58 UTC 2025 (1): Starting
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found loop4
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found loop5
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found loop6
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found loop7
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p1
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p2
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p3
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found usr
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p4
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p6
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p7
Sep 12 17:16:09.683530 extend-filesystems[1887]: Found nvme0n1p9
Sep 12 17:16:09.683530 extend-filesystems[1887]: Checking size of /dev/nvme0n1p9
Sep 12 17:16:09.651764 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:16:09.690395 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: proto: precision = 0.100 usec (-23)
Sep 12 17:16:09.690395 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: basedate set to 2025-08-31
Sep 12 17:16:09.690395 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:16:09.690488 jq[1916]: true
Sep 12 17:16:09.653427 ntpd[1889]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 17:16:09.651802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:16:09.653438 ntpd[1889]: ----------------------------------------------------
Sep 12 17:16:09.654148 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:16:09.653449 ntpd[1889]: ntp-4 is maintained by Network Time Foundation,
Sep 12 17:16:09.654174 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:16:09.653459 ntpd[1889]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 17:16:09.653469 ntpd[1889]: corporation. Support and training for ntp-4 are
Sep 12 17:16:09.653479 ntpd[1889]: available at https://www.nwtime.org/support
Sep 12 17:16:09.653489 ntpd[1889]: ----------------------------------------------------
Sep 12 17:16:09.676028 ntpd[1889]: proto: precision = 0.100 usec (-23)
Sep 12 17:16:09.684249 ntpd[1889]: basedate set to 2025-08-31
Sep 12 17:16:09.684274 ntpd[1889]: gps base set to 2025-08-31 (week 2382)
Sep 12 17:16:09.696864 dbus-daemon[1885]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1828 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 17:16:09.705214 ntpd[1889]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:16:09.705283 ntpd[1889]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:16:09.705398 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 17:16:09.705398 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 17:16:09.706984 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 17:16:09.707947 ntpd[1889]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Listen normally on 3 eth0 172.31.19.109:123
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Listen normally on 4 lo [::1]:123
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: bind(21) AF_INET6 fe80::4af:88ff:fec0:738f%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: unable to create socket on eth0 (5) for fe80::4af:88ff:fec0:738f%2#123
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: failed to init interface for address fe80::4af:88ff:fec0:738f%2
Sep 12 17:16:09.709871 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: Listening on routing socket on fd #21 for interface updates
Sep 12 17:16:09.707998 ntpd[1889]: Listen normally on 3 eth0 172.31.19.109:123
Sep 12 17:16:09.708044 ntpd[1889]: Listen normally on 4 lo [::1]:123
Sep 12 17:16:09.708099 ntpd[1889]: bind(21) AF_INET6 fe80::4af:88ff:fec0:738f%2#123 flags 0x11 failed: Cannot assign requested address
Sep 12 17:16:09.708122 ntpd[1889]: unable to create socket on eth0 (5) for fe80::4af:88ff:fec0:738f%2#123
Sep 12 17:16:09.708136 ntpd[1889]: failed to init interface for address fe80::4af:88ff:fec0:738f%2
Sep 12 17:16:09.708174 ntpd[1889]: Listening on routing socket on fd #21 for interface updates
Sep 12 17:16:09.715495 update_engine[1899]: I20250912 17:16:09.715379 1899 main.cc:92] Flatcar Update Engine starting
Sep 12 17:16:09.727795 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:16:09.727795 ntpd[1889]: 12 Sep 17:16:09 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:16:09.727473 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:16:09.727508 ntpd[1889]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 17:16:09.744480 extend-filesystems[1887]: Resized partition /dev/nvme0n1p9
Sep 12 17:16:09.753156 extend-filesystems[1938]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:16:09.770696 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 12 17:16:09.771708 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:16:09.777200 update_engine[1899]: I20250912 17:16:09.772902 1899 update_check_scheduler.cc:74] Next update check in 8m16s
Sep 12 17:16:09.773133 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 12 17:16:09.787941 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:16:09.800358 coreos-metadata[1884]: Sep 12 17:16:09.799 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 17:16:09.805174 coreos-metadata[1884]: Sep 12 17:16:09.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 12 17:16:09.807082 coreos-metadata[1884]: Sep 12 17:16:09.806 INFO Fetch successful
Sep 12 17:16:09.807082 coreos-metadata[1884]: Sep 12 17:16:09.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 12 17:16:09.813848 coreos-metadata[1884]: Sep 12 17:16:09.813 INFO Fetch successful
Sep 12 17:16:09.813848 coreos-metadata[1884]: Sep 12 17:16:09.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 12 17:16:09.814470 coreos-metadata[1884]: Sep 12 17:16:09.814 INFO Fetch successful
Sep 12 17:16:09.814470 coreos-metadata[1884]: Sep 12 17:16:09.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 12 17:16:09.823294 coreos-metadata[1884]: Sep 12 17:16:09.815 INFO Fetch successful
Sep 12 17:16:09.823294 coreos-metadata[1884]: Sep 12 17:16:09.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 12 17:16:09.823294 coreos-metadata[1884]: Sep 12 17:16:09.820 INFO Fetch failed with 404: resource not found
Sep 12 17:16:09.823294 coreos-metadata[1884]: Sep 12 17:16:09.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 12 17:16:09.825289 coreos-metadata[1884]: Sep 12 17:16:09.824 INFO Fetch successful
Sep 12 17:16:09.825289 coreos-metadata[1884]: Sep 12 17:16:09.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 12 17:16:09.827198 coreos-metadata[1884]: Sep 12 17:16:09.825 INFO Fetch successful
Sep 12 17:16:09.829494 coreos-metadata[1884]: Sep 12 17:16:09.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 12 17:16:09.829958 coreos-metadata[1884]: Sep 12 17:16:09.829 INFO Fetch successful
Sep 12 17:16:09.830515 coreos-metadata[1884]: Sep 12 17:16:09.830 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 12 17:16:09.832039 coreos-metadata[1884]: Sep 12 17:16:09.831 INFO Fetch successful
Sep 12 17:16:09.832039 coreos-metadata[1884]: Sep 12 17:16:09.831 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 12 17:16:09.835349 coreos-metadata[1884]: Sep 12 17:16:09.834 INFO Fetch successful
Sep 12 17:16:09.850862 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 12 17:16:09.869327 extend-filesystems[1938]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 12 17:16:09.869327 extend-filesystems[1938]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:16:09.869327 extend-filesystems[1938]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 12 17:16:09.890215 extend-filesystems[1887]: Resized filesystem in /dev/nvme0n1p9
Sep 12 17:16:09.877283 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:16:09.877595 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:16:09.911578 bash[1963]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:16:09.927650 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:16:09.936008 systemd[1]: Starting sshkeys.service...
Sep 12 17:16:09.937535 systemd-logind[1896]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:16:09.937560 systemd-logind[1896]: Watching system buttons on /dev/input/event2 (Sleep Button)
Sep 12 17:16:09.937585 systemd-logind[1896]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:16:09.940403 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 17:16:09.949468 systemd-logind[1896]: New seat seat0.
Sep 12 17:16:09.956928 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:16:09.965181 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:16:09.969774 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:16:09.979356 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:16:10.029673 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 12 17:16:10.039758 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1667)
Sep 12 17:16:10.040135 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 12 17:16:10.042269 dbus-daemon[1885]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1931 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 12 17:16:10.059167 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 12 17:16:10.136986 polkitd[1979]: Started polkitd version 121
Sep 12 17:16:10.173262 polkitd[1979]: Loading rules from directory /etc/polkit-1/rules.d
Sep 12 17:16:10.173353 polkitd[1979]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 12 17:16:10.191368 polkitd[1979]: Finished loading, compiling and executing 2 rules
Sep 12 17:16:10.192097 dbus-daemon[1885]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 12 17:16:10.195195 systemd[1]: Started polkit.service - Authorization Manager.
Sep 12 17:16:10.195491 polkitd[1979]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 12 17:16:10.240845 coreos-metadata[1970]: Sep 12 17:16:10.240 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 17:16:10.243155 coreos-metadata[1970]: Sep 12 17:16:10.242 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 12 17:16:10.249495 coreos-metadata[1970]: Sep 12 17:16:10.249 INFO Fetch successful
Sep 12 17:16:10.249495 coreos-metadata[1970]: Sep 12 17:16:10.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 12 17:16:10.251838 coreos-metadata[1970]: Sep 12 17:16:10.250 INFO Fetch successful
Sep 12 17:16:10.253474 unknown[1970]: wrote ssh authorized keys file for user: core
Sep 12 17:16:10.261458 systemd-hostnamed[1931]: Hostname set to (transient)
Sep 12 17:16:10.261620 systemd-resolved[1830]: System hostname changed to 'ip-172-31-19-109'.
Sep 12 17:16:10.304740 update-ssh-keys[2033]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:16:10.314830 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:16:10.326313 systemd[1]: Finished sshkeys.service.
Sep 12 17:16:10.337248 locksmithd[1944]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:16:10.513787 containerd[1909]: time="2025-09-12T17:16:10.506773023Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Sep 12 17:16:10.558733 sshd_keygen[1924]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:16:10.602451 containerd[1909]: time="2025-09-12T17:16:10.601280757Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.602890 systemd-networkd[1828]: eth0: Gained IPv6LL
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603413415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603454252Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603478524Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603668563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603692543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603784204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:16:10.603805 containerd[1909]: time="2025-09-12T17:16:10.603803391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604081000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604106178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604126553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604141365Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604257262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604500145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604741026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604760740Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604853960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:16:10.604916 containerd[1909]: time="2025-09-12T17:16:10.604912553Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:16:10.608397 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:16:10.611459 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:16:10.612343 containerd[1909]: time="2025-09-12T17:16:10.612180537Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:16:10.612343 containerd[1909]: time="2025-09-12T17:16:10.612277169Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:16:10.612455 containerd[1909]: time="2025-09-12T17:16:10.612304058Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:16:10.612455 containerd[1909]: time="2025-09-12T17:16:10.612377284Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:16:10.612455 containerd[1909]: time="2025-09-12T17:16:10.612413580Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:16:10.616167 containerd[1909]: time="2025-09-12T17:16:10.612917074Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..."
type=io.containerd.monitor.v1 Sep 12 17:16:10.622311 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:16:10.631740 containerd[1909]: time="2025-09-12T17:16:10.631659908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632369446Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632422499Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632449893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632492701Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632515319Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632534600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632569707Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632602591Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632621123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632649748Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632669498Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632700826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632746487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.632847 containerd[1909]: time="2025-09-12T17:16:10.632769809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633324780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633357689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633394642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633416116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633437515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633475991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633510811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633560747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633580473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633601891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633638700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633673632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633707568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.635635 containerd[1909]: time="2025-09-12T17:16:10.633777309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:16:10.633074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:16:10.640076 containerd[1909]: time="2025-09-12T17:16:10.634339133Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:16:10.640748 containerd[1909]: time="2025-09-12T17:16:10.640222533Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:16:10.640919 containerd[1909]: time="2025-09-12T17:16:10.640876197Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:16:10.641028 containerd[1909]: time="2025-09-12T17:16:10.641010090Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:16:10.641627 containerd[1909]: time="2025-09-12T17:16:10.641173386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.641627 containerd[1909]: time="2025-09-12T17:16:10.641205324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:16:10.641627 containerd[1909]: time="2025-09-12T17:16:10.641221587Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:16:10.641840 containerd[1909]: time="2025-09-12T17:16:10.641814545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:16:10.644121 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:16:10.646187 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Sep 12 17:16:10.647276 containerd[1909]: time="2025-09-12T17:16:10.644412552Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:16:10.647276 containerd[1909]: time="2025-09-12T17:16:10.645841031Z" level=info msg="Connect containerd service" Sep 12 17:16:10.647276 containerd[1909]: time="2025-09-12T17:16:10.645923288Z" level=info msg="using legacy CRI server" Sep 12 17:16:10.647276 containerd[1909]: time="2025-09-12T17:16:10.645937577Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:16:10.647833 containerd[1909]: time="2025-09-12T17:16:10.647806954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:16:10.651555 containerd[1909]: time="2025-09-12T17:16:10.651025356Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:16:10.651555 containerd[1909]: time="2025-09-12T17:16:10.651503690Z" level=info msg="Start subscribing containerd event" Sep 12 17:16:10.651885 containerd[1909]: time="2025-09-12T17:16:10.651862538Z" level=info msg="Start recovering state" Sep 12 17:16:10.655028 containerd[1909]: time="2025-09-12T17:16:10.653102985Z" level=info msg="Start event monitor" Sep 12 17:16:10.656778 containerd[1909]: time="2025-09-12T17:16:10.656041094Z" level=info msg="Start snapshots syncer" Sep 
12 17:16:10.656778 containerd[1909]: time="2025-09-12T17:16:10.656071284Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:16:10.656778 containerd[1909]: time="2025-09-12T17:16:10.656082559Z" level=info msg="Start streaming server" Sep 12 17:16:10.656778 containerd[1909]: time="2025-09-12T17:16:10.655568749Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:16:10.656778 containerd[1909]: time="2025-09-12T17:16:10.656265462Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:16:10.656778 containerd[1909]: time="2025-09-12T17:16:10.656321722Z" level=info msg="containerd successfully booted in 0.154353s" Sep 12 17:16:10.659805 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:16:10.661470 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:16:10.684259 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:16:10.684541 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:16:10.695809 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:16:10.737430 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:16:10.750665 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:16:10.754593 amazon-ssm-agent[2096]: Initializing new seelog logger Sep 12 17:16:10.755000 amazon-ssm-agent[2096]: New Seelog Logger Creation Complete Sep 12 17:16:10.755745 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.755745 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.755745 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 processing appconfig overrides Sep 12 17:16:10.757430 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:16:10.757563 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.757866 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 processing appconfig overrides Sep 12 17:16:10.758273 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.759744 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.759744 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 processing appconfig overrides Sep 12 17:16:10.759744 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO Proxy environment variables: Sep 12 17:16:10.761760 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:16:10.763878 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:16:10.766340 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:16:10.776003 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.776627 amazon-ssm-agent[2096]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:16:10.776963 amazon-ssm-agent[2096]: 2025/09/12 17:16:10 processing appconfig overrides Sep 12 17:16:10.864189 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO https_proxy: Sep 12 17:16:10.962434 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO http_proxy: Sep 12 17:16:11.061045 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO no_proxy: Sep 12 17:16:11.158419 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:16:11.214351 tar[1907]: linux-amd64/README.md Sep 12 17:16:11.233642 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 12 17:16:11.256782 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:16:11.356215 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO Agent will take identity from EC2 Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [Registrar] Starting registrar module Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:11 INFO [EC2Identity] EC2 registration was successful. 
Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:11 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:11 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:16:11.413202 amazon-ssm-agent[2096]: 2025-09-12 17:16:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:16:11.456011 amazon-ssm-agent[2096]: 2025-09-12 17:16:11 INFO [CredentialRefresher] Next credential rotation will be in 30.483328032116667 minutes Sep 12 17:16:12.427778 amazon-ssm-agent[2096]: 2025-09-12 17:16:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:16:12.508166 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:16:12.518854 systemd[1]: Started sshd@0-172.31.19.109:22-139.178.89.65:48934.service - OpenSSH per-connection server daemon (139.178.89.65:48934). Sep 12 17:16:12.528105 amazon-ssm-agent[2096]: 2025-09-12 17:16:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2129) started Sep 12 17:16:12.628529 amazon-ssm-agent[2096]: 2025-09-12 17:16:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:16:12.653897 ntpd[1889]: Listen normally on 6 eth0 [fe80::4af:88ff:fec0:738f%2]:123 Sep 12 17:16:12.654231 ntpd[1889]: 12 Sep 17:16:12 ntpd[1889]: Listen normally on 6 eth0 [fe80::4af:88ff:fec0:738f%2]:123 Sep 12 17:16:12.748864 sshd[2135]: Accepted publickey for core from 139.178.89.65 port 48934 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:12.750310 sshd-session[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:12.757928 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 12 17:16:12.766054 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:16:12.774994 systemd-logind[1896]: New session 1 of user core. Sep 12 17:16:12.784256 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:16:12.796254 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:16:12.801907 (systemd)[2144]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:16:12.805131 systemd-logind[1896]: New session c1 of user core. Sep 12 17:16:12.885060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:16:12.888562 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:16:12.890969 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:16:13.000882 systemd[2144]: Queued start job for default target default.target. Sep 12 17:16:13.011175 systemd[2144]: Created slice app.slice - User Application Slice. Sep 12 17:16:13.011221 systemd[2144]: Reached target paths.target - Paths. Sep 12 17:16:13.011282 systemd[2144]: Reached target timers.target - Timers. Sep 12 17:16:13.012758 systemd[2144]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:16:13.026126 systemd[2144]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:16:13.026279 systemd[2144]: Reached target sockets.target - Sockets. Sep 12 17:16:13.026341 systemd[2144]: Reached target basic.target - Basic System. Sep 12 17:16:13.026399 systemd[2144]: Reached target default.target - Main User Target. Sep 12 17:16:13.026442 systemd[2144]: Startup finished in 211ms. Sep 12 17:16:13.026532 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:16:13.035991 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 12 17:16:13.036857 systemd[1]: Startup finished in 619ms (kernel) + 7.829s (initrd) + 7.549s (userspace) = 15.998s. Sep 12 17:16:13.195596 systemd[1]: Started sshd@1-172.31.19.109:22-139.178.89.65:48944.service - OpenSSH per-connection server daemon (139.178.89.65:48944). Sep 12 17:16:13.361342 sshd[2169]: Accepted publickey for core from 139.178.89.65 port 48944 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:13.362022 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:13.368040 systemd-logind[1896]: New session 2 of user core. Sep 12 17:16:13.373155 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:16:13.496459 sshd[2171]: Connection closed by 139.178.89.65 port 48944 Sep 12 17:16:13.496995 sshd-session[2169]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:13.500196 systemd[1]: sshd@1-172.31.19.109:22-139.178.89.65:48944.service: Deactivated successfully. Sep 12 17:16:13.502182 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:16:13.504858 systemd-logind[1896]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:16:13.506220 systemd-logind[1896]: Removed session 2. Sep 12 17:16:13.537077 systemd[1]: Started sshd@2-172.31.19.109:22-139.178.89.65:48960.service - OpenSSH per-connection server daemon (139.178.89.65:48960). Sep 12 17:16:13.708728 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 48960 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:13.710262 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:13.716997 systemd-logind[1896]: New session 3 of user core. Sep 12 17:16:13.721927 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 12 17:16:13.840684 sshd[2180]: Connection closed by 139.178.89.65 port 48960 Sep 12 17:16:13.840915 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:13.847185 systemd[1]: sshd@2-172.31.19.109:22-139.178.89.65:48960.service: Deactivated successfully. Sep 12 17:16:13.847577 systemd-logind[1896]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:16:13.849341 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:16:13.850178 systemd-logind[1896]: Removed session 3. Sep 12 17:16:13.871534 systemd[1]: Started sshd@3-172.31.19.109:22-139.178.89.65:48968.service - OpenSSH per-connection server daemon (139.178.89.65:48968). Sep 12 17:16:13.929663 kubelet[2154]: E0912 17:16:13.929602 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:16:13.932394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:16:13.932663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:16:13.933266 systemd[1]: kubelet.service: Consumed 1.089s CPU time, 266.5M memory peak. Sep 12 17:16:14.045896 sshd[2186]: Accepted publickey for core from 139.178.89.65 port 48968 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:14.048389 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:14.053457 systemd-logind[1896]: New session 4 of user core. Sep 12 17:16:14.063977 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 12 17:16:14.180078 sshd[2189]: Connection closed by 139.178.89.65 port 48968 Sep 12 17:16:14.180850 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:14.185473 systemd[1]: sshd@3-172.31.19.109:22-139.178.89.65:48968.service: Deactivated successfully. Sep 12 17:16:14.187499 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:16:14.188333 systemd-logind[1896]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:16:14.189555 systemd-logind[1896]: Removed session 4. Sep 12 17:16:14.223328 systemd[1]: Started sshd@4-172.31.19.109:22-139.178.89.65:48974.service - OpenSSH per-connection server daemon (139.178.89.65:48974). Sep 12 17:16:14.382250 sshd[2195]: Accepted publickey for core from 139.178.89.65 port 48974 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:14.383641 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:14.389535 systemd-logind[1896]: New session 5 of user core. Sep 12 17:16:14.397950 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:16:14.537271 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:16:14.537600 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:16:14.554595 sudo[2198]: pam_unix(sudo:session): session closed for user root Sep 12 17:16:14.576815 sshd[2197]: Connection closed by 139.178.89.65 port 48974 Sep 12 17:16:14.577521 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:14.581346 systemd[1]: sshd@4-172.31.19.109:22-139.178.89.65:48974.service: Deactivated successfully. Sep 12 17:16:14.583677 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:16:14.585621 systemd-logind[1896]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:16:14.587161 systemd-logind[1896]: Removed session 5. 
Sep 12 17:16:14.619399 systemd[1]: Started sshd@5-172.31.19.109:22-139.178.89.65:48990.service - OpenSSH per-connection server daemon (139.178.89.65:48990). Sep 12 17:16:14.779934 sshd[2204]: Accepted publickey for core from 139.178.89.65 port 48990 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:14.781533 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:14.786959 systemd-logind[1896]: New session 6 of user core. Sep 12 17:16:14.797007 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:16:14.893159 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:16:14.893454 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:16:14.897783 sudo[2208]: pam_unix(sudo:session): session closed for user root Sep 12 17:16:14.903861 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:16:14.904174 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:16:14.919365 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:16:14.951832 augenrules[2230]: No rules Sep 12 17:16:14.953430 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:16:14.953886 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:16:14.956275 sudo[2207]: pam_unix(sudo:session): session closed for user root Sep 12 17:16:14.978473 sshd[2206]: Connection closed by 139.178.89.65 port 48990 Sep 12 17:16:14.979199 sshd-session[2204]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:14.982261 systemd[1]: sshd@5-172.31.19.109:22-139.178.89.65:48990.service: Deactivated successfully. Sep 12 17:16:14.984728 systemd[1]: session-6.scope: Deactivated successfully. 
Sep 12 17:16:14.987067 systemd-logind[1896]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:16:14.988361 systemd-logind[1896]: Removed session 6. Sep 12 17:16:15.030209 systemd[1]: Started sshd@6-172.31.19.109:22-139.178.89.65:49002.service - OpenSSH per-connection server daemon (139.178.89.65:49002). Sep 12 17:16:15.204441 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 49002 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:16:15.205819 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:16:15.210603 systemd-logind[1896]: New session 7 of user core. Sep 12 17:16:15.219261 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:16:15.318581 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:16:15.319202 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:16:16.096254 (dockerd)[2260]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:16:16.096701 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:16:16.809274 systemd-resolved[1830]: Clock change detected. Flushing caches. Sep 12 17:16:16.896889 dockerd[2260]: time="2025-09-12T17:16:16.896818049Z" level=info msg="Starting up" Sep 12 17:16:17.181526 dockerd[2260]: time="2025-09-12T17:16:17.181282562Z" level=info msg="Loading containers: start." Sep 12 17:16:17.370981 kernel: Initializing XFRM netlink socket Sep 12 17:16:17.401651 (udev-worker)[2284]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:16:17.490257 systemd-networkd[1828]: docker0: Link UP Sep 12 17:16:17.515736 dockerd[2260]: time="2025-09-12T17:16:17.515679534Z" level=info msg="Loading containers: done." 
Sep 12 17:16:17.533389 dockerd[2260]: time="2025-09-12T17:16:17.533319418Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:16:17.533585 dockerd[2260]: time="2025-09-12T17:16:17.533427059Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Sep 12 17:16:17.533585 dockerd[2260]: time="2025-09-12T17:16:17.533534159Z" level=info msg="Daemon has completed initialization"
Sep 12 17:16:17.572973 dockerd[2260]: time="2025-09-12T17:16:17.572887623Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:16:17.573158 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:16:18.728596 containerd[1909]: time="2025-09-12T17:16:18.728258340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 17:16:19.272892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489372632.mount: Deactivated successfully.
Sep 12 17:16:20.689317 containerd[1909]: time="2025-09-12T17:16:20.689260092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:20.690592 containerd[1909]: time="2025-09-12T17:16:20.690523498Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 17:16:20.692828 containerd[1909]: time="2025-09-12T17:16:20.692351958Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:20.695548 containerd[1909]: time="2025-09-12T17:16:20.695503176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:20.696802 containerd[1909]: time="2025-09-12T17:16:20.696765024Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.968463882s"
Sep 12 17:16:20.696961 containerd[1909]: time="2025-09-12T17:16:20.696924032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 17:16:20.697824 containerd[1909]: time="2025-09-12T17:16:20.697793767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 17:16:22.249487 containerd[1909]: time="2025-09-12T17:16:22.249414828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:22.250780 containerd[1909]: time="2025-09-12T17:16:22.250602042Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 17:16:22.252097 containerd[1909]: time="2025-09-12T17:16:22.252058427Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:22.255268 containerd[1909]: time="2025-09-12T17:16:22.255206218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:22.256975 containerd[1909]: time="2025-09-12T17:16:22.256495292Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.558664929s"
Sep 12 17:16:22.256975 containerd[1909]: time="2025-09-12T17:16:22.256541565Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 17:16:22.258086 containerd[1909]: time="2025-09-12T17:16:22.257630837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 17:16:23.638735 containerd[1909]: time="2025-09-12T17:16:23.638676152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:23.639990 containerd[1909]: time="2025-09-12T17:16:23.639910770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 17:16:23.641363 containerd[1909]: time="2025-09-12T17:16:23.640981316Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:23.643912 containerd[1909]: time="2025-09-12T17:16:23.643877929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:23.645243 containerd[1909]: time="2025-09-12T17:16:23.645205956Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.38753812s"
Sep 12 17:16:23.645345 containerd[1909]: time="2025-09-12T17:16:23.645247952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 17:16:23.645897 containerd[1909]: time="2025-09-12T17:16:23.645784166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 17:16:24.254886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:16:24.265469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:16:24.523011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:24.536874 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:16:24.613820 kubelet[2526]: E0912 17:16:24.613774    2526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:16:24.619345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:16:24.619559 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:16:24.620994 systemd[1]: kubelet.service: Consumed 191ms CPU time, 108.4M memory peak.
Sep 12 17:16:24.831411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2234354456.mount: Deactivated successfully.
Sep 12 17:16:25.407728 containerd[1909]: time="2025-09-12T17:16:25.407665390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:25.409618 containerd[1909]: time="2025-09-12T17:16:25.409537069Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 17:16:25.412062 containerd[1909]: time="2025-09-12T17:16:25.411997547Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:25.415346 containerd[1909]: time="2025-09-12T17:16:25.415273942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:25.416664 containerd[1909]: time="2025-09-12T17:16:25.416063241Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.770237347s"
Sep 12 17:16:25.416664 containerd[1909]: time="2025-09-12T17:16:25.416106318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 17:16:25.416664 containerd[1909]: time="2025-09-12T17:16:25.416650287Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:16:26.046272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562173852.mount: Deactivated successfully.
Sep 12 17:16:27.111022 containerd[1909]: time="2025-09-12T17:16:27.110966517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:27.112151 containerd[1909]: time="2025-09-12T17:16:27.112093664Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:16:27.113519 containerd[1909]: time="2025-09-12T17:16:27.113049178Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:27.116065 containerd[1909]: time="2025-09-12T17:16:27.116034131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:27.117043 containerd[1909]: time="2025-09-12T17:16:27.117009375Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.700331086s"
Sep 12 17:16:27.117043 containerd[1909]: time="2025-09-12T17:16:27.117044435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:16:27.117552 containerd[1909]: time="2025-09-12T17:16:27.117513295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:16:27.678844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932848197.mount: Deactivated successfully.
Sep 12 17:16:27.688930 containerd[1909]: time="2025-09-12T17:16:27.688871977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:27.690678 containerd[1909]: time="2025-09-12T17:16:27.690606913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:16:27.692816 containerd[1909]: time="2025-09-12T17:16:27.692766626Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:27.696035 containerd[1909]: time="2025-09-12T17:16:27.695971216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:27.697004 containerd[1909]: time="2025-09-12T17:16:27.696967136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 579.423675ms"
Sep 12 17:16:27.697107 containerd[1909]: time="2025-09-12T17:16:27.697007941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:16:27.698080 containerd[1909]: time="2025-09-12T17:16:27.698004034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:16:28.216200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109664323.mount: Deactivated successfully.
Sep 12 17:16:30.431789 containerd[1909]: time="2025-09-12T17:16:30.431734461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:30.433377 containerd[1909]: time="2025-09-12T17:16:30.433130516Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 17:16:30.435026 containerd[1909]: time="2025-09-12T17:16:30.434993034Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:30.439273 containerd[1909]: time="2025-09-12T17:16:30.438846800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:30.440197 containerd[1909]: time="2025-09-12T17:16:30.440152663Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.742093855s"
Sep 12 17:16:30.440296 containerd[1909]: time="2025-09-12T17:16:30.440200078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 17:16:33.569660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:33.569932 systemd[1]: kubelet.service: Consumed 191ms CPU time, 108.4M memory peak.
Sep 12 17:16:33.587321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:16:33.621417 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)...
Sep 12 17:16:33.621436 systemd[1]: Reloading...
Sep 12 17:16:33.775980 zram_generator::config[2731]: No configuration found.
Sep 12 17:16:33.902890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:16:34.056278 systemd[1]: Reloading finished in 434 ms.
Sep 12 17:16:34.115238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:34.126654 (kubelet)[2775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:16:34.127828 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:16:34.128117 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:16:34.128330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:34.128374 systemd[1]: kubelet.service: Consumed 137ms CPU time, 97.5M memory peak.
Sep 12 17:16:34.141443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:16:34.356179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:34.363271 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:16:34.417145 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:16:34.417145 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:16:34.417145 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:16:34.419392 kubelet[2787]: I0912 17:16:34.419320    2787 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:16:35.173986 kubelet[2787]: I0912 17:16:35.173278    2787 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:16:35.173986 kubelet[2787]: I0912 17:16:35.173448    2787 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:16:35.173986 kubelet[2787]: I0912 17:16:35.173835    2787 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:16:35.216853 kubelet[2787]: E0912 17:16:35.216787    2787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:35.217549 kubelet[2787]: I0912 17:16:35.217481    2787 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:16:35.238141 kubelet[2787]: E0912 17:16:35.238076    2787 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:16:35.238141 kubelet[2787]: I0912 17:16:35.238145    2787 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:16:35.242781 kubelet[2787]: I0912 17:16:35.242746    2787 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:16:35.245080 kubelet[2787]: I0912 17:16:35.245002    2787 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:16:35.245288 kubelet[2787]: I0912 17:16:35.245069    2787 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:16:35.248630 kubelet[2787]: I0912 17:16:35.248570    2787 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:16:35.248630 kubelet[2787]: I0912 17:16:35.248619    2787 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:16:35.250280 kubelet[2787]: I0912 17:16:35.250221    2787 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:16:35.255324 kubelet[2787]: I0912 17:16:35.255285    2787 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:16:35.255324 kubelet[2787]: I0912 17:16:35.255342    2787 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:16:35.255683 kubelet[2787]: I0912 17:16:35.255366    2787 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:16:35.255683 kubelet[2787]: I0912 17:16:35.255379    2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:16:35.269593 kubelet[2787]: W0912 17:16:35.269320    2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:35.269593 kubelet[2787]: E0912 17:16:35.269387    2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:35.269593 kubelet[2787]: I0912 17:16:35.269491    2787 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 12 17:16:35.274003 kubelet[2787]: I0912 17:16:35.273542    2787 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:16:35.274003 kubelet[2787]: W0912 17:16:35.273626    2787 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:16:35.274003 kubelet[2787]: W0912 17:16:35.273637    2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-109&limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:35.274003 kubelet[2787]: E0912 17:16:35.273697    2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-109&limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:35.276004 kubelet[2787]: I0912 17:16:35.275976    2787 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:16:35.276004 kubelet[2787]: I0912 17:16:35.276015    2787 server.go:1287] "Started kubelet"
Sep 12 17:16:35.281427 kubelet[2787]: I0912 17:16:35.281374    2787 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:16:35.286083 kubelet[2787]: I0912 17:16:35.284522    2787 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:16:35.286083 kubelet[2787]: I0912 17:16:35.286010    2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:16:35.286364 kubelet[2787]: I0912 17:16:35.286330    2787 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:16:35.287002 kubelet[2787]: I0912 17:16:35.286972    2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 17:16:35.293767 kubelet[2787]: E0912 17:16:35.288430    2787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.109:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-109.18649877e40e199f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-109,UID:ip-172-31-19-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-109,},FirstTimestamp:2025-09-12 17:16:35.275995551 +0000 UTC m=+0.907906844,LastTimestamp:2025-09-12 17:16:35.275995551 +0000 UTC m=+0.907906844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-109,}"
Sep 12 17:16:35.295385 kubelet[2787]: I0912 17:16:35.295336    2787 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 17:16:35.301527 kubelet[2787]: I0912 17:16:35.301487    2787 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 17:16:35.303268 kubelet[2787]: I0912 17:16:35.303252    2787 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 17:16:35.305813 kubelet[2787]: E0912 17:16:35.302841    2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": dial tcp 172.31.19.109:6443: connect: connection refused" interval="200ms"
Sep 12 17:16:35.305813 kubelet[2787]: E0912 17:16:35.302377    2787 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-109\" not found"
Sep 12 17:16:35.305813 kubelet[2787]: I0912 17:16:35.303445    2787 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 17:16:35.305813 kubelet[2787]: W0912 17:16:35.303861    2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:35.305813 kubelet[2787]: E0912 17:16:35.303922    2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:35.308671 kubelet[2787]: I0912 17:16:35.308641    2787 factory.go:221] Registration of the systemd container factory successfully
Sep 12 17:16:35.308911 kubelet[2787]: I0912 17:16:35.308888    2787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 17:16:35.313341 kubelet[2787]: E0912 17:16:35.313314    2787 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 17:16:35.314308 kubelet[2787]: I0912 17:16:35.314288    2787 factory.go:221] Registration of the containerd container factory successfully
Sep 12 17:16:35.326198 kubelet[2787]: I0912 17:16:35.326119    2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 17:16:35.327660 kubelet[2787]: I0912 17:16:35.327389    2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 17:16:35.327660 kubelet[2787]: I0912 17:16:35.327412    2787 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 17:16:35.327660 kubelet[2787]: I0912 17:16:35.327433    2787 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 17:16:35.327660 kubelet[2787]: I0912 17:16:35.327440    2787 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 17:16:35.327660 kubelet[2787]: E0912 17:16:35.327487    2787 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 17:16:35.335287 kubelet[2787]: W0912 17:16:35.335232    2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:35.335431 kubelet[2787]: E0912 17:16:35.335291    2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:35.355572 kubelet[2787]: I0912 17:16:35.355529    2787 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 17:16:35.355572 kubelet[2787]: I0912 17:16:35.355550    2787 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 17:16:35.355572 kubelet[2787]: I0912 17:16:35.355570    2787 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:16:35.358127 kubelet[2787]: I0912 17:16:35.358059    2787 policy_none.go:49] "None policy: Start"
Sep 12 17:16:35.358127 kubelet[2787]: I0912 17:16:35.358091    2787 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 17:16:35.358127 kubelet[2787]: I0912 17:16:35.358106    2787 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 17:16:35.364659 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 17:16:35.373823 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 17:16:35.377847 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 17:16:35.387732 kubelet[2787]: I0912 17:16:35.387090    2787 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 17:16:35.387732 kubelet[2787]: I0912 17:16:35.387353    2787 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 17:16:35.387732 kubelet[2787]: I0912 17:16:35.387367    2787 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 17:16:35.387732 kubelet[2787]: I0912 17:16:35.387739    2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 17:16:35.389190 kubelet[2787]: E0912 17:16:35.389116    2787 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 17:16:35.389190 kubelet[2787]: E0912 17:16:35.389161    2787 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-109\" not found"
Sep 12 17:16:35.407018 kubelet[2787]: E0912 17:16:35.406901    2787 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.109:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-109.18649877e40e199f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-109,UID:ip-172-31-19-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-109,},FirstTimestamp:2025-09-12 17:16:35.275995551 +0000 UTC m=+0.907906844,LastTimestamp:2025-09-12 17:16:35.275995551 +0000 UTC m=+0.907906844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-109,}"
Sep 12 17:16:35.447831 systemd[1]: Created slice kubepods-burstable-poda9cd0c1ce7748d8613ada2f3b9490e7c.slice - libcontainer container kubepods-burstable-poda9cd0c1ce7748d8613ada2f3b9490e7c.slice.
Sep 12 17:16:35.460017 kubelet[2787]: E0912 17:16:35.459972    2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:35.463372 systemd[1]: Created slice kubepods-burstable-podaa31053d11284276852736b218176c60.slice - libcontainer container kubepods-burstable-podaa31053d11284276852736b218176c60.slice.
Sep 12 17:16:35.470196 systemd[1]: Created slice kubepods-burstable-pod2ad6d702212d02135fc86f931b70b1aa.slice - libcontainer container kubepods-burstable-pod2ad6d702212d02135fc86f931b70b1aa.slice.
Sep 12 17:16:35.473417 kubelet[2787]: E0912 17:16:35.473373 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:35.476337 kubelet[2787]: E0912 17:16:35.476308 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:35.489143 kubelet[2787]: I0912 17:16:35.489105 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-109"
Sep 12 17:16:35.489543 kubelet[2787]: E0912 17:16:35.489506 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.109:6443/api/v1/nodes\": dial tcp 172.31.19.109:6443: connect: connection refused" node="ip-172-31-19-109"
Sep 12 17:16:35.504829 kubelet[2787]: I0912 17:16:35.504613 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9cd0c1ce7748d8613ada2f3b9490e7c-ca-certs\") pod \"kube-apiserver-ip-172-31-19-109\" (UID: \"a9cd0c1ce7748d8613ada2f3b9490e7c\") " pod="kube-system/kube-apiserver-ip-172-31-19-109"
Sep 12 17:16:35.504829 kubelet[2787]: I0912 17:16:35.504657 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9cd0c1ce7748d8613ada2f3b9490e7c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-109\" (UID: \"a9cd0c1ce7748d8613ada2f3b9490e7c\") " pod="kube-system/kube-apiserver-ip-172-31-19-109"
Sep 12 17:16:35.504829 kubelet[2787]: I0912 17:16:35.504679 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:35.504829 kubelet[2787]: I0912 17:16:35.504695 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:35.504829 kubelet[2787]: I0912 17:16:35.504711 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:35.505125 kubelet[2787]: I0912 17:16:35.504725 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9cd0c1ce7748d8613ada2f3b9490e7c-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-109\" (UID: \"a9cd0c1ce7748d8613ada2f3b9490e7c\") " pod="kube-system/kube-apiserver-ip-172-31-19-109"
Sep 12 17:16:35.505125 kubelet[2787]: I0912 17:16:35.504742 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:35.505125 kubelet[2787]: E0912 17:16:35.504745 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": dial tcp 172.31.19.109:6443: connect: connection refused" interval="400ms"
Sep 12 17:16:35.505125 kubelet[2787]: I0912 17:16:35.504756 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:35.505125 kubelet[2787]: I0912 17:16:35.504796 2787 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa31053d11284276852736b218176c60-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-109\" (UID: \"aa31053d11284276852736b218176c60\") " pod="kube-system/kube-scheduler-ip-172-31-19-109"
Sep 12 17:16:35.691732 kubelet[2787]: I0912 17:16:35.691692 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-109"
Sep 12 17:16:35.692071 kubelet[2787]: E0912 17:16:35.692040 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.109:6443/api/v1/nodes\": dial tcp 172.31.19.109:6443: connect: connection refused" node="ip-172-31-19-109"
Sep 12 17:16:35.763357 containerd[1909]: time="2025-09-12T17:16:35.763115308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-109,Uid:a9cd0c1ce7748d8613ada2f3b9490e7c,Namespace:kube-system,Attempt:0,}"
Sep 12 17:16:35.774862 containerd[1909]: time="2025-09-12T17:16:35.774813253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-109,Uid:aa31053d11284276852736b218176c60,Namespace:kube-system,Attempt:0,}"
Sep 12 17:16:35.778251 containerd[1909]: time="2025-09-12T17:16:35.777848313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-109,Uid:2ad6d702212d02135fc86f931b70b1aa,Namespace:kube-system,Attempt:0,}"
Sep 12 17:16:35.906351 kubelet[2787]: E0912 17:16:35.906302 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": dial tcp 172.31.19.109:6443: connect: connection refused" interval="800ms"
Sep 12 17:16:36.094292 kubelet[2787]: I0912 17:16:36.094188 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-109"
Sep 12 17:16:36.094560 kubelet[2787]: E0912 17:16:36.094466 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.109:6443/api/v1/nodes\": dial tcp 172.31.19.109:6443: connect: connection refused" node="ip-172-31-19-109"
Sep 12 17:16:36.222148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186732834.mount: Deactivated successfully.
Sep 12 17:16:36.228280 containerd[1909]: time="2025-09-12T17:16:36.228229727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:16:36.230774 containerd[1909]: time="2025-09-12T17:16:36.230720878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 12 17:16:36.237729 containerd[1909]: time="2025-09-12T17:16:36.237684343Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:16:36.239501 containerd[1909]: time="2025-09-12T17:16:36.239282561Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:16:36.244107 containerd[1909]: time="2025-09-12T17:16:36.243258042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:16:36.245652 containerd[1909]: time="2025-09-12T17:16:36.245604881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:16:36.246649 containerd[1909]: time="2025-09-12T17:16:36.246616582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.405979ms"
Sep 12 17:16:36.248070 containerd[1909]: time="2025-09-12T17:16:36.248037242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:16:36.248177 kubelet[2787]: W0912 17:16:36.248040 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:36.248319 kubelet[2787]: E0912 17:16:36.248285 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:36.250661 containerd[1909]: time="2025-09-12T17:16:36.250608722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 17:16:36.255195 containerd[1909]: time="2025-09-12T17:16:36.255149428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.241145ms"
Sep 12 17:16:36.257749 containerd[1909]: time="2025-09-12T17:16:36.256971393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.875591ms"
Sep 12 17:16:36.436516 containerd[1909]: time="2025-09-12T17:16:36.435511655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:16:36.436516 containerd[1909]: time="2025-09-12T17:16:36.435601918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:16:36.436516 containerd[1909]: time="2025-09-12T17:16:36.435625063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:36.436516 containerd[1909]: time="2025-09-12T17:16:36.435719062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:36.453641 containerd[1909]: time="2025-09-12T17:16:36.450448425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:16:36.453641 containerd[1909]: time="2025-09-12T17:16:36.452612410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:16:36.453641 containerd[1909]: time="2025-09-12T17:16:36.452633802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:36.453641 containerd[1909]: time="2025-09-12T17:16:36.452756136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:36.455012 containerd[1909]: time="2025-09-12T17:16:36.454882672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:16:36.455012 containerd[1909]: time="2025-09-12T17:16:36.454975085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:16:36.455214 containerd[1909]: time="2025-09-12T17:16:36.455001929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:36.455512 containerd[1909]: time="2025-09-12T17:16:36.455366262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:36.491245 systemd[1]: Started cri-containerd-aa9e07d6a5675b732f07f30b5ee3c9447ef0d7097c7cfcd337ed3240aa9064d5.scope - libcontainer container aa9e07d6a5675b732f07f30b5ee3c9447ef0d7097c7cfcd337ed3240aa9064d5.
Sep 12 17:16:36.496503 systemd[1]: Started cri-containerd-abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb.scope - libcontainer container abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb.
Sep 12 17:16:36.514116 systemd[1]: Started cri-containerd-d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1.scope - libcontainer container d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1.
Sep 12 17:16:36.586539 containerd[1909]: time="2025-09-12T17:16:36.586480525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-109,Uid:aa31053d11284276852736b218176c60,Namespace:kube-system,Attempt:0,} returns sandbox id \"abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb\""
Sep 12 17:16:36.600585 containerd[1909]: time="2025-09-12T17:16:36.600414724Z" level=info msg="CreateContainer within sandbox \"abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 17:16:36.612256 containerd[1909]: time="2025-09-12T17:16:36.612192589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-109,Uid:2ad6d702212d02135fc86f931b70b1aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1\""
Sep 12 17:16:36.616870 containerd[1909]: time="2025-09-12T17:16:36.616643635Z" level=info msg="CreateContainer within sandbox \"d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 17:16:36.620398 kubelet[2787]: W0912 17:16:36.620002 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:36.620398 kubelet[2787]: E0912 17:16:36.620083 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:36.627217 containerd[1909]: time="2025-09-12T17:16:36.626692787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-109,Uid:a9cd0c1ce7748d8613ada2f3b9490e7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa9e07d6a5675b732f07f30b5ee3c9447ef0d7097c7cfcd337ed3240aa9064d5\""
Sep 12 17:16:36.632017 containerd[1909]: time="2025-09-12T17:16:36.631925093Z" level=info msg="CreateContainer within sandbox \"aa9e07d6a5675b732f07f30b5ee3c9447ef0d7097c7cfcd337ed3240aa9064d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 17:16:36.644011 containerd[1909]: time="2025-09-12T17:16:36.643967444Z" level=info msg="CreateContainer within sandbox \"d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16\""
Sep 12 17:16:36.644973 containerd[1909]: time="2025-09-12T17:16:36.644841601Z" level=info msg="StartContainer for \"2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16\""
Sep 12 17:16:36.646914 containerd[1909]: time="2025-09-12T17:16:36.646878914Z" level=info msg="CreateContainer within sandbox \"abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b\""
Sep 12 17:16:36.648323 containerd[1909]: time="2025-09-12T17:16:36.648299512Z" level=info msg="StartContainer for \"b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b\""
Sep 12 17:16:36.651050 kubelet[2787]: W0912 17:16:36.650954 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-109&limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:36.651050 kubelet[2787]: E0912 17:16:36.651015 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-109&limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:36.652425 containerd[1909]: time="2025-09-12T17:16:36.652396504Z" level=info msg="CreateContainer within sandbox \"aa9e07d6a5675b732f07f30b5ee3c9447ef0d7097c7cfcd337ed3240aa9064d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7b0ece005b536d38a0584ca0a752af881cf0dbc077ac5a391759e5c3271feeae\""
Sep 12 17:16:36.653153 containerd[1909]: time="2025-09-12T17:16:36.653131144Z" level=info msg="StartContainer for \"7b0ece005b536d38a0584ca0a752af881cf0dbc077ac5a391759e5c3271feeae\""
Sep 12 17:16:36.664832 kubelet[2787]: W0912 17:16:36.664741 2787 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.109:6443: connect: connection refused
Sep 12 17:16:36.664832 kubelet[2787]: E0912 17:16:36.664796 2787 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:36.686134 systemd[1]: Started cri-containerd-2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16.scope - libcontainer container 2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16.
Sep 12 17:16:36.688476 systemd[1]: Started cri-containerd-b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b.scope - libcontainer container b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b.
Sep 12 17:16:36.707673 kubelet[2787]: E0912 17:16:36.707495 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": dial tcp 172.31.19.109:6443: connect: connection refused" interval="1.6s"
Sep 12 17:16:36.717623 systemd[1]: Started cri-containerd-7b0ece005b536d38a0584ca0a752af881cf0dbc077ac5a391759e5c3271feeae.scope - libcontainer container 7b0ece005b536d38a0584ca0a752af881cf0dbc077ac5a391759e5c3271feeae.
Sep 12 17:16:36.796362 containerd[1909]: time="2025-09-12T17:16:36.796302036Z" level=info msg="StartContainer for \"b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b\" returns successfully"
Sep 12 17:16:36.810130 containerd[1909]: time="2025-09-12T17:16:36.807866375Z" level=info msg="StartContainer for \"2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16\" returns successfully"
Sep 12 17:16:36.823315 containerd[1909]: time="2025-09-12T17:16:36.823271684Z" level=info msg="StartContainer for \"7b0ece005b536d38a0584ca0a752af881cf0dbc077ac5a391759e5c3271feeae\" returns successfully"
Sep 12 17:16:36.897850 kubelet[2787]: I0912 17:16:36.897824 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-109"
Sep 12 17:16:36.898532 kubelet[2787]: E0912 17:16:36.898493 2787 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.109:6443/api/v1/nodes\": dial tcp 172.31.19.109:6443: connect: connection refused" node="ip-172-31-19-109"
Sep 12 17:16:37.227896 kubelet[2787]: E0912 17:16:37.227539 2787 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.109:6443: connect: connection refused" logger="UnhandledError"
Sep 12 17:16:37.364385 kubelet[2787]: E0912 17:16:37.364352 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:37.372063 kubelet[2787]: E0912 17:16:37.372034 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:37.376553 kubelet[2787]: E0912 17:16:37.376513 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:38.308973 kubelet[2787]: E0912 17:16:38.308805 2787 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": dial tcp 172.31.19.109:6443: connect: connection refused" interval="3.2s"
Sep 12 17:16:38.376456 kubelet[2787]: E0912 17:16:38.376094 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:38.376456 kubelet[2787]: E0912 17:16:38.376108 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:38.501991 kubelet[2787]: I0912 17:16:38.501952 2787 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-109"
Sep 12 17:16:39.378529 kubelet[2787]: E0912 17:16:39.378243 2787 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-109\" not found" node="ip-172-31-19-109"
Sep 12 17:16:40.036966 kubelet[2787]: I0912 17:16:40.036741 2787 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-109"
Sep 12 17:16:40.036966 kubelet[2787]: E0912 17:16:40.036782 2787 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-109\": node \"ip-172-31-19-109\" not found"
Sep 12 17:16:40.103938 kubelet[2787]: I0912 17:16:40.102962 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:40.112114 kubelet[2787]: E0912 17:16:40.112078 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-109"
Sep 12 17:16:40.112114 kubelet[2787]: I0912 17:16:40.112109 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-109"
Sep 12 17:16:40.114074 kubelet[2787]: E0912 17:16:40.113734 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-109"
Sep 12 17:16:40.114074 kubelet[2787]: I0912 17:16:40.113766 2787 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-109"
Sep 12 17:16:40.115720 kubelet[2787]: E0912 17:16:40.115686 2787 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-109"
Sep 12 17:16:40.261420 kubelet[2787]: I0912 17:16:40.261375 2787 apiserver.go:52] "Watching apiserver"
Sep 12 17:16:40.304348 kubelet[2787]: I0912 17:16:40.304217 2787 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 17:16:40.445349 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 17:16:42.322689 systemd[1]: Reload requested from client PID 3062 ('systemctl') (unit session-7.scope)...
Sep 12 17:16:42.322708 systemd[1]: Reloading...
Sep 12 17:16:42.476017 zram_generator::config[3107]: No configuration found.
Sep 12 17:16:42.673894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:16:42.827391 systemd[1]: Reloading finished in 504 ms.
Sep 12 17:16:42.856165 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:16:42.867655 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:16:42.868053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:42.868219 systemd[1]: kubelet.service: Consumed 1.351s CPU time, 130.7M memory peak.
Sep 12 17:16:42.876413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:16:43.147084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:16:43.161541 (kubelet)[3167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:16:43.249608 kubelet[3167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:16:43.252427 kubelet[3167]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:16:43.252427 kubelet[3167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:16:43.252427 kubelet[3167]: I0912 17:16:43.249730 3167 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:16:43.259401 kubelet[3167]: I0912 17:16:43.259357 3167 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 17:16:43.259401 kubelet[3167]: I0912 17:16:43.259389 3167 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:16:43.259744 kubelet[3167]: I0912 17:16:43.259721 3167 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 17:16:43.263514 kubelet[3167]: I0912 17:16:43.263478 3167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 17:16:43.269702 kubelet[3167]: I0912 17:16:43.269501 3167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:16:43.273724 kubelet[3167]: E0912 17:16:43.273453 3167 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 17:16:43.273724 kubelet[3167]: I0912 17:16:43.273490 3167 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 17:16:43.277065 kubelet[3167]: I0912 17:16:43.277036 3167 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:16:43.279156 kubelet[3167]: I0912 17:16:43.278705 3167 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:16:43.279156 kubelet[3167]: I0912 17:16:43.278764 3167 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:16:43.279156 kubelet[3167]: I0912 17:16:43.279057 3167 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:16:43.279156 kubelet[3167]: I0912 17:16:43.279068 3167 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 17:16:43.282967 kubelet[3167]: I0912 17:16:43.282754 3167 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:16:43.283113 kubelet[3167]: I0912 17:16:43.282996 3167 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 17:16:43.284893 kubelet[3167]: I0912 17:16:43.283028 3167 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:16:43.285064 kubelet[3167]: I0912 17:16:43.284906 3167 kubelet.go:352] "Adding apiserver pod source"
Sep 12 17:16:43.285064 kubelet[3167]: I0912 17:16:43.284922 3167 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:16:43.293386 kubelet[3167]: I0912 17:16:43.293251 3167 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Sep 12 17:16:43.296428 kubelet[3167]: I0912 17:16:43.295530 3167 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 17:16:43.306615 kubelet[3167]: I0912 17:16:43.306388 3167 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 17:16:43.308106 kubelet[3167]: I0912 17:16:43.308038 3167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 17:16:43.308443 kubelet[3167]: I0912 17:16:43.308422 3167 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 17:16:43.313485 kubelet[3167]: I0912 17:16:43.313140 3167 server.go:1287] "Started kubelet"
Sep 12 17:16:43.320976 kubelet[3167]: I0912 17:16:43.319053 3167 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 17:16:43.322814 kubelet[3167]: I0912 17:16:43.322778 3167 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 17:16:43.341242 kubelet[3167]: I0912 17:16:43.340825
3167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:16:43.343907 kubelet[3167]: I0912 17:16:43.343870 3167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:16:43.354220 kubelet[3167]: I0912 17:16:43.354188 3167 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:16:43.354836 kubelet[3167]: I0912 17:16:43.354797 3167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:16:43.355415 kubelet[3167]: I0912 17:16:43.344434 3167 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:16:43.355515 kubelet[3167]: E0912 17:16:43.344614 3167 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:16:43.358975 kubelet[3167]: I0912 17:16:43.344399 3167 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:16:43.360029 kubelet[3167]: I0912 17:16:43.359363 3167 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:16:43.363724 kubelet[3167]: I0912 17:16:43.363700 3167 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:16:43.367605 sudo[3182]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:16:43.368109 sudo[3182]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:16:43.382444 kubelet[3167]: I0912 17:16:43.382183 3167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:16:43.385186 kubelet[3167]: I0912 17:16:43.385152 3167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:16:43.385328 kubelet[3167]: I0912 17:16:43.385219 3167 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:16:43.385328 kubelet[3167]: I0912 17:16:43.385244 3167 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:16:43.385328 kubelet[3167]: I0912 17:16:43.385252 3167 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:16:43.385454 kubelet[3167]: E0912 17:16:43.385326 3167 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:16:43.462048 kubelet[3167]: I0912 17:16:43.461640 3167 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:16:43.462048 kubelet[3167]: I0912 17:16:43.461665 3167 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:16:43.462048 kubelet[3167]: I0912 17:16:43.461693 3167 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:16:43.462259 kubelet[3167]: I0912 17:16:43.462052 3167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:16:43.462259 kubelet[3167]: I0912 17:16:43.462068 3167 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:16:43.462259 kubelet[3167]: I0912 17:16:43.462100 3167 policy_none.go:49] "None policy: Start" Sep 12 17:16:43.462259 kubelet[3167]: I0912 17:16:43.462114 3167 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:16:43.462259 kubelet[3167]: I0912 17:16:43.462128 3167 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:16:43.462452 kubelet[3167]: I0912 17:16:43.462315 3167 state_mem.go:75] "Updated machine memory state" Sep 12 17:16:43.470732 kubelet[3167]: I0912 17:16:43.470354 3167 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:16:43.471141 kubelet[3167]: I0912 
17:16:43.471109 3167 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:16:43.471474 kubelet[3167]: I0912 17:16:43.471131 3167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:16:43.472041 kubelet[3167]: I0912 17:16:43.471600 3167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:16:43.477062 kubelet[3167]: E0912 17:16:43.476632 3167 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:16:43.491523 kubelet[3167]: I0912 17:16:43.487897 3167 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-109" Sep 12 17:16:43.491523 kubelet[3167]: I0912 17:16:43.489237 3167 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-109" Sep 12 17:16:43.492578 kubelet[3167]: I0912 17:16:43.487897 3167 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-109" Sep 12 17:16:43.564218 kubelet[3167]: I0912 17:16:43.564053 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9cd0c1ce7748d8613ada2f3b9490e7c-ca-certs\") pod \"kube-apiserver-ip-172-31-19-109\" (UID: \"a9cd0c1ce7748d8613ada2f3b9490e7c\") " pod="kube-system/kube-apiserver-ip-172-31-19-109" Sep 12 17:16:43.564218 kubelet[3167]: I0912 17:16:43.564167 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9cd0c1ce7748d8613ada2f3b9490e7c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-109\" (UID: \"a9cd0c1ce7748d8613ada2f3b9490e7c\") " pod="kube-system/kube-apiserver-ip-172-31-19-109" Sep 12 17:16:43.564686 kubelet[3167]: I0912 
17:16:43.564441 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109" Sep 12 17:16:43.564686 kubelet[3167]: I0912 17:16:43.564477 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109" Sep 12 17:16:43.564686 kubelet[3167]: I0912 17:16:43.564515 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109" Sep 12 17:16:43.564686 kubelet[3167]: I0912 17:16:43.564542 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109" Sep 12 17:16:43.564686 kubelet[3167]: I0912 17:16:43.564571 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa31053d11284276852736b218176c60-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-109\" (UID: \"aa31053d11284276852736b218176c60\") " 
pod="kube-system/kube-scheduler-ip-172-31-19-109" Sep 12 17:16:43.564932 kubelet[3167]: I0912 17:16:43.564595 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9cd0c1ce7748d8613ada2f3b9490e7c-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-109\" (UID: \"a9cd0c1ce7748d8613ada2f3b9490e7c\") " pod="kube-system/kube-apiserver-ip-172-31-19-109" Sep 12 17:16:43.564932 kubelet[3167]: I0912 17:16:43.564627 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ad6d702212d02135fc86f931b70b1aa-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-109\" (UID: \"2ad6d702212d02135fc86f931b70b1aa\") " pod="kube-system/kube-controller-manager-ip-172-31-19-109" Sep 12 17:16:43.583980 kubelet[3167]: I0912 17:16:43.583919 3167 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-109" Sep 12 17:16:43.596161 kubelet[3167]: I0912 17:16:43.595811 3167 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-109" Sep 12 17:16:43.596161 kubelet[3167]: I0912 17:16:43.595894 3167 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-109" Sep 12 17:16:44.199386 sudo[3182]: pam_unix(sudo:session): session closed for user root Sep 12 17:16:44.292974 kubelet[3167]: I0912 17:16:44.292224 3167 apiserver.go:52] "Watching apiserver" Sep 12 17:16:44.356348 kubelet[3167]: I0912 17:16:44.355932 3167 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:16:44.397158 kubelet[3167]: I0912 17:16:44.396704 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-109" podStartSLOduration=1.396681912 podStartE2EDuration="1.396681912s" podCreationTimestamp="2025-09-12 17:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:16:44.382670914 +0000 UTC m=+1.213461410" watchObservedRunningTime="2025-09-12 17:16:44.396681912 +0000 UTC m=+1.227472410" Sep 12 17:16:44.410711 kubelet[3167]: I0912 17:16:44.410119 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-109" podStartSLOduration=1.410095254 podStartE2EDuration="1.410095254s" podCreationTimestamp="2025-09-12 17:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:16:44.397678765 +0000 UTC m=+1.228469269" watchObservedRunningTime="2025-09-12 17:16:44.410095254 +0000 UTC m=+1.240885755" Sep 12 17:16:44.427238 kubelet[3167]: I0912 17:16:44.425497 3167 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-109" Sep 12 17:16:44.427238 kubelet[3167]: I0912 17:16:44.426707 3167 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-109" Sep 12 17:16:44.442169 kubelet[3167]: E0912 17:16:44.441817 3167 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-109\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-109" Sep 12 17:16:44.446404 kubelet[3167]: E0912 17:16:44.444221 3167 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-109\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-109" Sep 12 17:16:44.457758 kubelet[3167]: I0912 17:16:44.457615 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-109" podStartSLOduration=1.4575934560000001 podStartE2EDuration="1.457593456s" podCreationTimestamp="2025-09-12 17:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-12 17:16:44.411115123 +0000 UTC m=+1.241905625" watchObservedRunningTime="2025-09-12 17:16:44.457593456 +0000 UTC m=+1.288383959" Sep 12 17:16:46.133713 sudo[2242]: pam_unix(sudo:session): session closed for user root Sep 12 17:16:46.156971 sshd[2241]: Connection closed by 139.178.89.65 port 49002 Sep 12 17:16:46.158160 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Sep 12 17:16:46.161666 systemd[1]: sshd@6-172.31.19.109:22-139.178.89.65:49002.service: Deactivated successfully. Sep 12 17:16:46.164487 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:16:46.164748 systemd[1]: session-7.scope: Consumed 5.111s CPU time, 207.9M memory peak. Sep 12 17:16:46.167401 systemd-logind[1896]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:16:46.169044 systemd-logind[1896]: Removed session 7. Sep 12 17:16:47.045244 kubelet[3167]: I0912 17:16:47.045211 3167 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:16:47.046486 containerd[1909]: time="2025-09-12T17:16:47.045808276Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:16:47.046863 kubelet[3167]: I0912 17:16:47.046221 3167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 17:16:47.774217 kubelet[3167]: I0912 17:16:47.774076 3167 status_manager.go:890] "Failed to get status for pod" podUID="9f59dfb3-a3a8-4c68-9e76-77830ec22288" pod="kube-system/kube-proxy-q62fd" err="pods \"kube-proxy-q62fd\" is forbidden: User \"system:node:ip-172-31-19-109\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-109' and this object"
Sep 12 17:16:47.779304 systemd[1]: Created slice kubepods-besteffort-pod9f59dfb3_a3a8_4c68_9e76_77830ec22288.slice - libcontainer container kubepods-besteffort-pod9f59dfb3_a3a8_4c68_9e76_77830ec22288.slice.
Sep 12 17:16:47.791447 kubelet[3167]: I0912 17:16:47.791394 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f59dfb3-a3a8-4c68-9e76-77830ec22288-lib-modules\") pod \"kube-proxy-q62fd\" (UID: \"9f59dfb3-a3a8-4c68-9e76-77830ec22288\") " pod="kube-system/kube-proxy-q62fd"
Sep 12 17:16:47.791447 kubelet[3167]: I0912 17:16:47.791449 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f59dfb3-a3a8-4c68-9e76-77830ec22288-xtables-lock\") pod \"kube-proxy-q62fd\" (UID: \"9f59dfb3-a3a8-4c68-9e76-77830ec22288\") " pod="kube-system/kube-proxy-q62fd"
Sep 12 17:16:47.791696 kubelet[3167]: I0912 17:16:47.791474 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc82r\" (UniqueName: \"kubernetes.io/projected/9f59dfb3-a3a8-4c68-9e76-77830ec22288-kube-api-access-kc82r\") pod \"kube-proxy-q62fd\" (UID: \"9f59dfb3-a3a8-4c68-9e76-77830ec22288\") " pod="kube-system/kube-proxy-q62fd"
Sep 12 17:16:47.791696 kubelet[3167]: I0912 17:16:47.791502 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f59dfb3-a3a8-4c68-9e76-77830ec22288-kube-proxy\") pod \"kube-proxy-q62fd\" (UID: \"9f59dfb3-a3a8-4c68-9e76-77830ec22288\") " pod="kube-system/kube-proxy-q62fd"
Sep 12 17:16:47.806289 systemd[1]: Created slice kubepods-burstable-pod69872b7d_98bb_4451_9d57_bd126960412b.slice - libcontainer container kubepods-burstable-pod69872b7d_98bb_4451_9d57_bd126960412b.slice.
Sep 12 17:16:47.893086 kubelet[3167]: I0912 17:16:47.893036 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-etc-cni-netd\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893086 kubelet[3167]: I0912 17:16:47.893079 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-bpf-maps\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893329 kubelet[3167]: I0912 17:16:47.893105 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-run\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893329 kubelet[3167]: I0912 17:16:47.893126 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-kernel\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893329 kubelet[3167]: I0912 17:16:47.893168 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-hostproc\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893329 kubelet[3167]: I0912 17:16:47.893188 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cni-path\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893329 kubelet[3167]: I0912 17:16:47.893211 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-net\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893329 kubelet[3167]: I0912 17:16:47.893247 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-cgroup\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893572 kubelet[3167]: I0912 17:16:47.893271 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-xtables-lock\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893572 kubelet[3167]: I0912 17:16:47.893294 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-hubble-tls\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893572 kubelet[3167]: I0912 17:16:47.893320 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69872b7d-98bb-4451-9d57-bd126960412b-clustermesh-secrets\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893572 kubelet[3167]: I0912 17:16:47.893346 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz85\" (UniqueName: \"kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-kube-api-access-jjz85\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893572 kubelet[3167]: I0912 17:16:47.893407 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-lib-modules\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:47.893572 kubelet[3167]: I0912 17:16:47.893445 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69872b7d-98bb-4451-9d57-bd126960412b-cilium-config-path\") pod \"cilium-qfht2\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") " pod="kube-system/cilium-qfht2"
Sep 12 17:16:48.097076 containerd[1909]: time="2025-09-12T17:16:48.095661930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q62fd,Uid:9f59dfb3-a3a8-4c68-9e76-77830ec22288,Namespace:kube-system,Attempt:0,}"
Sep 12 17:16:48.114073 systemd[1]: Created slice kubepods-besteffort-pod11064375_191d_45a1_8158_d8a41208bead.slice - libcontainer container kubepods-besteffort-pod11064375_191d_45a1_8158_d8a41208bead.slice.
Sep 12 17:16:48.120625 containerd[1909]: time="2025-09-12T17:16:48.120573302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qfht2,Uid:69872b7d-98bb-4451-9d57-bd126960412b,Namespace:kube-system,Attempt:0,}"
Sep 12 17:16:48.195401 kubelet[3167]: I0912 17:16:48.195285 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11064375-191d-45a1-8158-d8a41208bead-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nswt9\" (UID: \"11064375-191d-45a1-8158-d8a41208bead\") " pod="kube-system/cilium-operator-6c4d7847fc-nswt9"
Sep 12 17:16:48.195401 kubelet[3167]: I0912 17:16:48.195336 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8ck\" (UniqueName: \"kubernetes.io/projected/11064375-191d-45a1-8158-d8a41208bead-kube-api-access-qm8ck\") pod \"cilium-operator-6c4d7847fc-nswt9\" (UID: \"11064375-191d-45a1-8158-d8a41208bead\") " pod="kube-system/cilium-operator-6c4d7847fc-nswt9"
Sep 12 17:16:48.203431 containerd[1909]: time="2025-09-12T17:16:48.203283568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:16:48.203893 containerd[1909]: time="2025-09-12T17:16:48.203828967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:16:48.204199 containerd[1909]: time="2025-09-12T17:16:48.203887944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:48.204415 containerd[1909]: time="2025-09-12T17:16:48.204361341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:48.204634 containerd[1909]: time="2025-09-12T17:16:48.204550381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:16:48.204800 containerd[1909]: time="2025-09-12T17:16:48.204732548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:16:48.207755 containerd[1909]: time="2025-09-12T17:16:48.207671305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:48.207978 containerd[1909]: time="2025-09-12T17:16:48.207809529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:48.236165 systemd[1]: Started cri-containerd-a1a7e70f807587f1cfb6ca02d800c5a231eaf2e0bb06329ea2174406b2d4de22.scope - libcontainer container a1a7e70f807587f1cfb6ca02d800c5a231eaf2e0bb06329ea2174406b2d4de22.
Sep 12 17:16:48.241343 systemd[1]: Started cri-containerd-92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6.scope - libcontainer container 92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6.
Sep 12 17:16:48.289728 containerd[1909]: time="2025-09-12T17:16:48.289673725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qfht2,Uid:69872b7d-98bb-4451-9d57-bd126960412b,Namespace:kube-system,Attempt:0,} returns sandbox id \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\""
Sep 12 17:16:48.292585 containerd[1909]: time="2025-09-12T17:16:48.292544632Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 12 17:16:48.302561 containerd[1909]: time="2025-09-12T17:16:48.302099888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q62fd,Uid:9f59dfb3-a3a8-4c68-9e76-77830ec22288,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1a7e70f807587f1cfb6ca02d800c5a231eaf2e0bb06329ea2174406b2d4de22\""
Sep 12 17:16:48.310473 containerd[1909]: time="2025-09-12T17:16:48.310134683Z" level=info msg="CreateContainer within sandbox \"a1a7e70f807587f1cfb6ca02d800c5a231eaf2e0bb06329ea2174406b2d4de22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:16:48.354534 containerd[1909]: time="2025-09-12T17:16:48.354338838Z" level=info msg="CreateContainer within sandbox \"a1a7e70f807587f1cfb6ca02d800c5a231eaf2e0bb06329ea2174406b2d4de22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1101f0542cd144f9c697aeb9dcb58739432362e99187e7e79b6b3e8a7fcc69d\""
Sep 12 17:16:48.356649 containerd[1909]: time="2025-09-12T17:16:48.356316846Z" level=info msg="StartContainer for \"b1101f0542cd144f9c697aeb9dcb58739432362e99187e7e79b6b3e8a7fcc69d\""
Sep 12 17:16:48.394249 systemd[1]: Started cri-containerd-b1101f0542cd144f9c697aeb9dcb58739432362e99187e7e79b6b3e8a7fcc69d.scope - libcontainer container b1101f0542cd144f9c697aeb9dcb58739432362e99187e7e79b6b3e8a7fcc69d.
Sep 12 17:16:48.422201 containerd[1909]: time="2025-09-12T17:16:48.422161288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nswt9,Uid:11064375-191d-45a1-8158-d8a41208bead,Namespace:kube-system,Attempt:0,}"
Sep 12 17:16:48.439450 containerd[1909]: time="2025-09-12T17:16:48.439366133Z" level=info msg="StartContainer for \"b1101f0542cd144f9c697aeb9dcb58739432362e99187e7e79b6b3e8a7fcc69d\" returns successfully"
Sep 12 17:16:48.465037 containerd[1909]: time="2025-09-12T17:16:48.464661846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:16:48.465037 containerd[1909]: time="2025-09-12T17:16:48.464768604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:16:48.465037 containerd[1909]: time="2025-09-12T17:16:48.464793739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:48.465037 containerd[1909]: time="2025-09-12T17:16:48.464907801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:16:48.486234 systemd[1]: Started cri-containerd-7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0.scope - libcontainer container 7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0.
Sep 12 17:16:48.543095 containerd[1909]: time="2025-09-12T17:16:48.543055086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nswt9,Uid:11064375-191d-45a1-8158-d8a41208bead,Namespace:kube-system,Attempt:0,} returns sandbox id \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\""
Sep 12 17:16:50.983526 kubelet[3167]: I0912 17:16:50.983448 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q62fd" podStartSLOduration=3.9834099 podStartE2EDuration="3.9834099s" podCreationTimestamp="2025-09-12 17:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:16:49.457655142 +0000 UTC m=+6.288445641" watchObservedRunningTime="2025-09-12 17:16:50.9834099 +0000 UTC m=+7.814200399"
Sep 12 17:16:54.650270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429158672.mount: Deactivated successfully.
Sep 12 17:16:55.201081 update_engine[1899]: I20250912 17:16:55.201002 1899 update_attempter.cc:509] Updating boot flags...
Sep 12 17:16:55.344972 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3564)
Sep 12 17:16:55.663248 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3563)
Sep 12 17:16:57.740611 containerd[1909]: time="2025-09-12T17:16:57.740548322Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:57.742713 containerd[1909]: time="2025-09-12T17:16:57.742645859Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 12 17:16:57.744808 containerd[1909]: time="2025-09-12T17:16:57.744726010Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:16:57.747879 containerd[1909]: time="2025-09-12T17:16:57.747764488Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.455173713s"
Sep 12 17:16:57.747879 containerd[1909]: time="2025-09-12T17:16:57.747796284Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 12 17:16:57.749096 containerd[1909]: time="2025-09-12T17:16:57.748875397Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 12 17:16:57.751031 containerd[1909]: time="2025-09-12T17:16:57.750900409Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:16:57.807750 containerd[1909]: time="2025-09-12T17:16:57.807669437Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\""
Sep 12 17:16:57.809140 containerd[1909]: time="2025-09-12T17:16:57.808429458Z" level=info msg="StartContainer for \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\""
Sep 12 17:16:57.939174 systemd[1]: Started cri-containerd-b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9.scope - libcontainer container b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9.
Sep 12 17:16:57.974306 containerd[1909]: time="2025-09-12T17:16:57.974257220Z" level=info msg="StartContainer for \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\" returns successfully"
Sep 12 17:16:57.986990 systemd[1]: cri-containerd-b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9.scope: Deactivated successfully.
Sep 12 17:16:58.186216 containerd[1909]: time="2025-09-12T17:16:58.161602533Z" level=info msg="shim disconnected" id=b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9 namespace=k8s.io Sep 12 17:16:58.186515 containerd[1909]: time="2025-09-12T17:16:58.186220157Z" level=warning msg="cleaning up after shim disconnected" id=b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9 namespace=k8s.io Sep 12 17:16:58.186515 containerd[1909]: time="2025-09-12T17:16:58.186240675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:16:58.545118 containerd[1909]: time="2025-09-12T17:16:58.545022673Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:16:58.568961 containerd[1909]: time="2025-09-12T17:16:58.568884504Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\"" Sep 12 17:16:58.570371 containerd[1909]: time="2025-09-12T17:16:58.569749570Z" level=info msg="StartContainer for \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\"" Sep 12 17:16:58.618210 systemd[1]: Started cri-containerd-b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9.scope - libcontainer container b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9. Sep 12 17:16:58.659014 containerd[1909]: time="2025-09-12T17:16:58.658899260Z" level=info msg="StartContainer for \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\" returns successfully" Sep 12 17:16:58.675521 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:16:58.676896 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 17:16:58.677481 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:16:58.686335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:16:58.686654 systemd[1]: cri-containerd-b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9.scope: Deactivated successfully. Sep 12 17:16:58.727361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:16:58.738165 containerd[1909]: time="2025-09-12T17:16:58.738083733Z" level=info msg="shim disconnected" id=b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9 namespace=k8s.io Sep 12 17:16:58.738165 containerd[1909]: time="2025-09-12T17:16:58.738149865Z" level=warning msg="cleaning up after shim disconnected" id=b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9 namespace=k8s.io Sep 12 17:16:58.738165 containerd[1909]: time="2025-09-12T17:16:58.738164299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:16:58.801007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9-rootfs.mount: Deactivated successfully. Sep 12 17:16:59.012707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400621482.mount: Deactivated successfully. 
Sep 12 17:16:59.552238 containerd[1909]: time="2025-09-12T17:16:59.551661051Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:16:59.634085 containerd[1909]: time="2025-09-12T17:16:59.633056213Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\"" Sep 12 17:16:59.636794 containerd[1909]: time="2025-09-12T17:16:59.634609092Z" level=info msg="StartContainer for \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\"" Sep 12 17:16:59.686342 systemd[1]: Started cri-containerd-a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22.scope - libcontainer container a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22. Sep 12 17:16:59.757696 containerd[1909]: time="2025-09-12T17:16:59.757169911Z" level=info msg="StartContainer for \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\" returns successfully" Sep 12 17:16:59.757885 systemd[1]: cri-containerd-a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22.scope: Deactivated successfully. Sep 12 17:16:59.819070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22-rootfs.mount: Deactivated successfully. 
Sep 12 17:16:59.835937 containerd[1909]: time="2025-09-12T17:16:59.835699909Z" level=info msg="shim disconnected" id=a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22 namespace=k8s.io Sep 12 17:16:59.835937 containerd[1909]: time="2025-09-12T17:16:59.835754822Z" level=warning msg="cleaning up after shim disconnected" id=a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22 namespace=k8s.io Sep 12 17:16:59.835937 containerd[1909]: time="2025-09-12T17:16:59.835767382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:00.282821 containerd[1909]: time="2025-09-12T17:17:00.282611668Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:17:00.284368 containerd[1909]: time="2025-09-12T17:17:00.284311522Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 17:17:00.286718 containerd[1909]: time="2025-09-12T17:17:00.286684312Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:17:00.288791 containerd[1909]: time="2025-09-12T17:17:00.288755047Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.539827175s" Sep 12 17:17:00.288791 containerd[1909]: time="2025-09-12T17:17:00.288791533Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:17:00.292098 containerd[1909]: time="2025-09-12T17:17:00.291207054Z" level=info msg="CreateContainer within sandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:17:00.323818 containerd[1909]: time="2025-09-12T17:17:00.323713460Z" level=info msg="CreateContainer within sandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\"" Sep 12 17:17:00.324576 containerd[1909]: time="2025-09-12T17:17:00.324446594Z" level=info msg="StartContainer for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\"" Sep 12 17:17:00.361211 systemd[1]: Started cri-containerd-026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173.scope - libcontainer container 026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173. 
Sep 12 17:17:00.403659 containerd[1909]: time="2025-09-12T17:17:00.403500883Z" level=info msg="StartContainer for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" returns successfully" Sep 12 17:17:00.612034 containerd[1909]: time="2025-09-12T17:17:00.611991009Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:17:00.659231 containerd[1909]: time="2025-09-12T17:17:00.659066089Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\"" Sep 12 17:17:00.659989 containerd[1909]: time="2025-09-12T17:17:00.659870131Z" level=info msg="StartContainer for \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\"" Sep 12 17:17:00.726210 systemd[1]: Started cri-containerd-5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06.scope - libcontainer container 5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06. Sep 12 17:17:00.773271 systemd[1]: cri-containerd-5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06.scope: Deactivated successfully. Sep 12 17:17:00.780233 containerd[1909]: time="2025-09-12T17:17:00.779722869Z" level=info msg="StartContainer for \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\" returns successfully" Sep 12 17:17:00.808697 systemd[1]: run-containerd-runc-k8s.io-026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173-runc.1mEkS8.mount: Deactivated successfully. Sep 12 17:17:00.848771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06-rootfs.mount: Deactivated successfully. 
Sep 12 17:17:00.868636 containerd[1909]: time="2025-09-12T17:17:00.868416166Z" level=info msg="shim disconnected" id=5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06 namespace=k8s.io Sep 12 17:17:00.868636 containerd[1909]: time="2025-09-12T17:17:00.868483215Z" level=warning msg="cleaning up after shim disconnected" id=5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06 namespace=k8s.io Sep 12 17:17:00.868636 containerd[1909]: time="2025-09-12T17:17:00.868493881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:17:01.572588 containerd[1909]: time="2025-09-12T17:17:01.572238128Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:17:01.609559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552868579.mount: Deactivated successfully. Sep 12 17:17:01.618982 containerd[1909]: time="2025-09-12T17:17:01.617265410Z" level=info msg="CreateContainer within sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\"" Sep 12 17:17:01.626852 containerd[1909]: time="2025-09-12T17:17:01.625560291Z" level=info msg="StartContainer for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\"" Sep 12 17:17:01.677650 kubelet[3167]: I0912 17:17:01.677568 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nswt9" podStartSLOduration=1.930665396 podStartE2EDuration="13.675629745s" podCreationTimestamp="2025-09-12 17:16:48 +0000 UTC" firstStartedPulling="2025-09-12 17:16:48.544515343 +0000 UTC m=+5.375305841" lastFinishedPulling="2025-09-12 17:17:00.289479696 +0000 UTC m=+17.120270190" observedRunningTime="2025-09-12 17:17:00.615489411 +0000 UTC m=+17.446279914" 
watchObservedRunningTime="2025-09-12 17:17:01.675629745 +0000 UTC m=+18.506420260" Sep 12 17:17:01.686251 systemd[1]: Started cri-containerd-7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e.scope - libcontainer container 7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e. Sep 12 17:17:01.830059 containerd[1909]: time="2025-09-12T17:17:01.829435062Z" level=info msg="StartContainer for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" returns successfully" Sep 12 17:17:02.027685 systemd[1]: run-containerd-runc-k8s.io-7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e-runc.Y34Cva.mount: Deactivated successfully. Sep 12 17:17:02.255639 kubelet[3167]: I0912 17:17:02.253299 3167 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:17:02.425559 kubelet[3167]: I0912 17:17:02.425521 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8b9bdf5-76ad-4f68-bb7c-3d9719131140-config-volume\") pod \"coredns-668d6bf9bc-vqg2n\" (UID: \"a8b9bdf5-76ad-4f68-bb7c-3d9719131140\") " pod="kube-system/coredns-668d6bf9bc-vqg2n" Sep 12 17:17:02.425839 kubelet[3167]: I0912 17:17:02.425816 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfdq\" (UniqueName: \"kubernetes.io/projected/a8b9bdf5-76ad-4f68-bb7c-3d9719131140-kube-api-access-kqfdq\") pod \"coredns-668d6bf9bc-vqg2n\" (UID: \"a8b9bdf5-76ad-4f68-bb7c-3d9719131140\") " pod="kube-system/coredns-668d6bf9bc-vqg2n" Sep 12 17:17:02.446770 systemd[1]: Created slice kubepods-burstable-poda8b9bdf5_76ad_4f68_bb7c_3d9719131140.slice - libcontainer container kubepods-burstable-poda8b9bdf5_76ad_4f68_bb7c_3d9719131140.slice. 
Sep 12 17:17:02.461580 systemd[1]: Created slice kubepods-burstable-pod58451589_060e_43dc_979c_fd0304735802.slice - libcontainer container kubepods-burstable-pod58451589_060e_43dc_979c_fd0304735802.slice. Sep 12 17:17:02.527862 kubelet[3167]: I0912 17:17:02.526610 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9qm\" (UniqueName: \"kubernetes.io/projected/58451589-060e-43dc-979c-fd0304735802-kube-api-access-bj9qm\") pod \"coredns-668d6bf9bc-kpz9d\" (UID: \"58451589-060e-43dc-979c-fd0304735802\") " pod="kube-system/coredns-668d6bf9bc-kpz9d" Sep 12 17:17:02.527862 kubelet[3167]: I0912 17:17:02.526680 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58451589-060e-43dc-979c-fd0304735802-config-volume\") pod \"coredns-668d6bf9bc-kpz9d\" (UID: \"58451589-060e-43dc-979c-fd0304735802\") " pod="kube-system/coredns-668d6bf9bc-kpz9d" Sep 12 17:17:02.757282 containerd[1909]: time="2025-09-12T17:17:02.757232678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vqg2n,Uid:a8b9bdf5-76ad-4f68-bb7c-3d9719131140,Namespace:kube-system,Attempt:0,}" Sep 12 17:17:02.767071 containerd[1909]: time="2025-09-12T17:17:02.767020820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kpz9d,Uid:58451589-060e-43dc-979c-fd0304735802,Namespace:kube-system,Attempt:0,}" Sep 12 17:17:04.971072 systemd-networkd[1828]: cilium_host: Link UP Sep 12 17:17:04.972442 systemd-networkd[1828]: cilium_net: Link UP Sep 12 17:17:04.974146 (udev-worker)[4145]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:17:04.974167 systemd-networkd[1828]: cilium_net: Gained carrier Sep 12 17:17:04.974546 systemd-networkd[1828]: cilium_host: Gained carrier Sep 12 17:17:04.975727 (udev-worker)[4182]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:17:05.118747 (udev-worker)[4192]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:17:05.125051 systemd-networkd[1828]: cilium_host: Gained IPv6LL Sep 12 17:17:05.128212 systemd-networkd[1828]: cilium_vxlan: Link UP Sep 12 17:17:05.128220 systemd-networkd[1828]: cilium_vxlan: Gained carrier Sep 12 17:17:05.733193 systemd-networkd[1828]: cilium_net: Gained IPv6LL Sep 12 17:17:05.846988 kernel: NET: Registered PF_ALG protocol family Sep 12 17:17:06.309210 systemd-networkd[1828]: cilium_vxlan: Gained IPv6LL Sep 12 17:17:06.619581 systemd-networkd[1828]: lxc_health: Link UP Sep 12 17:17:06.621462 systemd-networkd[1828]: lxc_health: Gained carrier Sep 12 17:17:06.622418 (udev-worker)[4195]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:17:06.917568 systemd-networkd[1828]: lxc84fcf1771380: Link UP Sep 12 17:17:06.930076 kernel: eth0: renamed from tmp6c83a Sep 12 17:17:06.940361 kernel: eth0: renamed from tmp612e1 Sep 12 17:17:06.948027 systemd-networkd[1828]: lxc21692029e243: Link UP Sep 12 17:17:06.952788 systemd-networkd[1828]: lxc84fcf1771380: Gained carrier Sep 12 17:17:06.953542 systemd-networkd[1828]: lxc21692029e243: Gained carrier Sep 12 17:17:08.147028 kubelet[3167]: I0912 17:17:08.146960 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qfht2" podStartSLOduration=11.689883954999999 podStartE2EDuration="21.146919956s" podCreationTimestamp="2025-09-12 17:16:47 +0000 UTC" firstStartedPulling="2025-09-12 17:16:48.291662923 +0000 UTC m=+5.122453416" lastFinishedPulling="2025-09-12 17:16:57.748698918 +0000 UTC m=+14.579489417" observedRunningTime="2025-09-12 17:17:02.737151208 +0000 UTC m=+19.567941713" watchObservedRunningTime="2025-09-12 17:17:08.146919956 +0000 UTC m=+24.977710459" Sep 12 17:17:08.485121 systemd-networkd[1828]: lxc_health: Gained IPv6LL Sep 12 17:17:08.549637 systemd-networkd[1828]: lxc84fcf1771380: Gained IPv6LL Sep 12 17:17:08.805582 
systemd-networkd[1828]: lxc21692029e243: Gained IPv6LL Sep 12 17:17:10.809355 ntpd[1889]: Listen normally on 7 cilium_host 192.168.0.26:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 7 cilium_host 192.168.0.26:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 8 cilium_net [fe80::74b5:86ff:feff:21e8%4]:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 9 cilium_host [fe80::44da:5cff:fe3f:8c68%5]:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 10 cilium_vxlan [fe80::c4d:b2ff:fe66:176%6]:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 11 lxc_health [fe80::d41f:efff:fe6d:de06%8]:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 12 lxc84fcf1771380 [fe80::c8a6:60ff:fe62:48e7%10]:123 Sep 12 17:17:10.811150 ntpd[1889]: 12 Sep 17:17:10 ntpd[1889]: Listen normally on 13 lxc21692029e243 [fe80::ac65:15ff:fe59:ebff%12]:123 Sep 12 17:17:10.809447 ntpd[1889]: Listen normally on 8 cilium_net [fe80::74b5:86ff:feff:21e8%4]:123 Sep 12 17:17:10.809504 ntpd[1889]: Listen normally on 9 cilium_host [fe80::44da:5cff:fe3f:8c68%5]:123 Sep 12 17:17:10.809545 ntpd[1889]: Listen normally on 10 cilium_vxlan [fe80::c4d:b2ff:fe66:176%6]:123 Sep 12 17:17:10.809584 ntpd[1889]: Listen normally on 11 lxc_health [fe80::d41f:efff:fe6d:de06%8]:123 Sep 12 17:17:10.809623 ntpd[1889]: Listen normally on 12 lxc84fcf1771380 [fe80::c8a6:60ff:fe62:48e7%10]:123 Sep 12 17:17:10.809662 ntpd[1889]: Listen normally on 13 lxc21692029e243 [fe80::ac65:15ff:fe59:ebff%12]:123 Sep 12 17:17:11.570342 containerd[1909]: time="2025-09-12T17:17:11.569773290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:17:11.570342 containerd[1909]: time="2025-09-12T17:17:11.569865274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:17:11.570342 containerd[1909]: time="2025-09-12T17:17:11.569890516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:17:11.570342 containerd[1909]: time="2025-09-12T17:17:11.570037651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:17:11.613015 containerd[1909]: time="2025-09-12T17:17:11.612587521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:17:11.613015 containerd[1909]: time="2025-09-12T17:17:11.612686311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:17:11.613015 containerd[1909]: time="2025-09-12T17:17:11.612711324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:17:11.613015 containerd[1909]: time="2025-09-12T17:17:11.612834525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:17:11.674165 systemd[1]: Started cri-containerd-6c83a73c94477f3d2d142e8f96eb552413fcb5e28dc14a976d3cb63627777615.scope - libcontainer container 6c83a73c94477f3d2d142e8f96eb552413fcb5e28dc14a976d3cb63627777615. Sep 12 17:17:11.696374 systemd[1]: Started cri-containerd-612e14ede0fe878dbb7b35e51e842b4e5aa3a0f282c88fa93575a41a59967cff.scope - libcontainer container 612e14ede0fe878dbb7b35e51e842b4e5aa3a0f282c88fa93575a41a59967cff. 
Sep 12 17:17:11.779897 containerd[1909]: time="2025-09-12T17:17:11.779814246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kpz9d,Uid:58451589-060e-43dc-979c-fd0304735802,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c83a73c94477f3d2d142e8f96eb552413fcb5e28dc14a976d3cb63627777615\"" Sep 12 17:17:11.785639 containerd[1909]: time="2025-09-12T17:17:11.785438440Z" level=info msg="CreateContainer within sandbox \"6c83a73c94477f3d2d142e8f96eb552413fcb5e28dc14a976d3cb63627777615\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:17:11.817917 containerd[1909]: time="2025-09-12T17:17:11.817607798Z" level=info msg="CreateContainer within sandbox \"6c83a73c94477f3d2d142e8f96eb552413fcb5e28dc14a976d3cb63627777615\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41e473cba505bae998ac2dfd0a8bff77bb03e76927e73682055847f53c1943ab\"" Sep 12 17:17:11.818862 containerd[1909]: time="2025-09-12T17:17:11.818828624Z" level=info msg="StartContainer for \"41e473cba505bae998ac2dfd0a8bff77bb03e76927e73682055847f53c1943ab\"" Sep 12 17:17:11.849387 containerd[1909]: time="2025-09-12T17:17:11.848557837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vqg2n,Uid:a8b9bdf5-76ad-4f68-bb7c-3d9719131140,Namespace:kube-system,Attempt:0,} returns sandbox id \"612e14ede0fe878dbb7b35e51e842b4e5aa3a0f282c88fa93575a41a59967cff\"" Sep 12 17:17:11.853677 containerd[1909]: time="2025-09-12T17:17:11.853642460Z" level=info msg="CreateContainer within sandbox \"612e14ede0fe878dbb7b35e51e842b4e5aa3a0f282c88fa93575a41a59967cff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:17:11.878214 systemd[1]: Started cri-containerd-41e473cba505bae998ac2dfd0a8bff77bb03e76927e73682055847f53c1943ab.scope - libcontainer container 41e473cba505bae998ac2dfd0a8bff77bb03e76927e73682055847f53c1943ab. 
Sep 12 17:17:11.882345 containerd[1909]: time="2025-09-12T17:17:11.882306087Z" level=info msg="CreateContainer within sandbox \"612e14ede0fe878dbb7b35e51e842b4e5aa3a0f282c88fa93575a41a59967cff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4edc9839f74a6605e62e80689ff29609b5a67984e8ff1ecd1566561431adc05\"" Sep 12 17:17:11.883169 containerd[1909]: time="2025-09-12T17:17:11.883144096Z" level=info msg="StartContainer for \"b4edc9839f74a6605e62e80689ff29609b5a67984e8ff1ecd1566561431adc05\"" Sep 12 17:17:11.925158 systemd[1]: Started cri-containerd-b4edc9839f74a6605e62e80689ff29609b5a67984e8ff1ecd1566561431adc05.scope - libcontainer container b4edc9839f74a6605e62e80689ff29609b5a67984e8ff1ecd1566561431adc05. Sep 12 17:17:11.961548 containerd[1909]: time="2025-09-12T17:17:11.961507503Z" level=info msg="StartContainer for \"41e473cba505bae998ac2dfd0a8bff77bb03e76927e73682055847f53c1943ab\" returns successfully" Sep 12 17:17:11.971360 containerd[1909]: time="2025-09-12T17:17:11.971315538Z" level=info msg="StartContainer for \"b4edc9839f74a6605e62e80689ff29609b5a67984e8ff1ecd1566561431adc05\" returns successfully" Sep 12 17:17:12.581480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783229775.mount: Deactivated successfully. 
Sep 12 17:17:12.644967 kubelet[3167]: I0912 17:17:12.644109 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kpz9d" podStartSLOduration=24.64408877 podStartE2EDuration="24.64408877s" podCreationTimestamp="2025-09-12 17:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:17:12.641337873 +0000 UTC m=+29.472128378" watchObservedRunningTime="2025-09-12 17:17:12.64408877 +0000 UTC m=+29.474879273" Sep 12 17:17:13.650269 kubelet[3167]: I0912 17:17:13.650210 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vqg2n" podStartSLOduration=25.650194232 podStartE2EDuration="25.650194232s" podCreationTimestamp="2025-09-12 17:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:17:12.66377414 +0000 UTC m=+29.494564644" watchObservedRunningTime="2025-09-12 17:17:13.650194232 +0000 UTC m=+30.480984734" Sep 12 17:17:16.076327 systemd[1]: Started sshd@7-172.31.19.109:22-139.178.89.65:33652.service - OpenSSH per-connection server daemon (139.178.89.65:33652). Sep 12 17:17:16.263421 sshd[4728]: Accepted publickey for core from 139.178.89.65 port 33652 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:17:16.265583 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:16.273916 systemd-logind[1896]: New session 8 of user core. Sep 12 17:17:16.277156 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:17:17.068896 sshd[4730]: Connection closed by 139.178.89.65 port 33652 Sep 12 17:17:17.069543 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:17.074256 systemd[1]: sshd@7-172.31.19.109:22-139.178.89.65:33652.service: Deactivated successfully. Sep 12 17:17:17.079216 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:17:17.081259 systemd-logind[1896]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:17:17.083412 systemd-logind[1896]: Removed session 8. Sep 12 17:17:22.107263 systemd[1]: Started sshd@8-172.31.19.109:22-139.178.89.65:46364.service - OpenSSH per-connection server daemon (139.178.89.65:46364). Sep 12 17:17:22.279100 sshd[4746]: Accepted publickey for core from 139.178.89.65 port 46364 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:17:22.280548 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:22.286078 systemd-logind[1896]: New session 9 of user core. Sep 12 17:17:22.292260 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:17:22.495217 sshd[4749]: Connection closed by 139.178.89.65 port 46364 Sep 12 17:17:22.496074 sshd-session[4746]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:22.509694 systemd[1]: sshd@8-172.31.19.109:22-139.178.89.65:46364.service: Deactivated successfully. Sep 12 17:17:22.525470 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:17:22.530083 systemd-logind[1896]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:17:22.535394 systemd-logind[1896]: Removed session 9. Sep 12 17:17:27.532383 systemd[1]: Started sshd@9-172.31.19.109:22-139.178.89.65:46366.service - OpenSSH per-connection server daemon (139.178.89.65:46366). 
Sep 12 17:17:27.706197 sshd[4761]: Accepted publickey for core from 139.178.89.65 port 46366 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:17:27.707621 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:27.714088 systemd-logind[1896]: New session 10 of user core. Sep 12 17:17:27.722209 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:17:27.910108 sshd[4763]: Connection closed by 139.178.89.65 port 46366 Sep 12 17:17:27.911612 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:27.915447 systemd[1]: sshd@9-172.31.19.109:22-139.178.89.65:46366.service: Deactivated successfully. Sep 12 17:17:27.918016 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:17:27.918775 systemd-logind[1896]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:17:27.919689 systemd-logind[1896]: Removed session 10. Sep 12 17:17:32.946371 systemd[1]: Started sshd@10-172.31.19.109:22-139.178.89.65:39660.service - OpenSSH per-connection server daemon (139.178.89.65:39660). Sep 12 17:17:33.106637 sshd[4776]: Accepted publickey for core from 139.178.89.65 port 39660 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:17:33.108043 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:33.114426 systemd-logind[1896]: New session 11 of user core. Sep 12 17:17:33.118141 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:17:33.310296 sshd[4778]: Connection closed by 139.178.89.65 port 39660 Sep 12 17:17:33.310896 sshd-session[4776]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:33.314041 systemd[1]: sshd@10-172.31.19.109:22-139.178.89.65:39660.service: Deactivated successfully. Sep 12 17:17:33.316061 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 12 17:17:33.317581 systemd-logind[1896]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:17:33.318745 systemd-logind[1896]: Removed session 11. Sep 12 17:17:33.348400 systemd[1]: Started sshd@11-172.31.19.109:22-139.178.89.65:39672.service - OpenSSH per-connection server daemon (139.178.89.65:39672). Sep 12 17:17:33.524129 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 39672 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:17:33.525536 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:33.530740 systemd-logind[1896]: New session 12 of user core. Sep 12 17:17:33.537174 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:17:33.818378 sshd[4793]: Connection closed by 139.178.89.65 port 39672 Sep 12 17:17:33.820256 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Sep 12 17:17:33.826646 systemd-logind[1896]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:17:33.828330 systemd[1]: sshd@11-172.31.19.109:22-139.178.89.65:39672.service: Deactivated successfully. Sep 12 17:17:33.833614 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:17:33.866113 systemd[1]: Started sshd@12-172.31.19.109:22-139.178.89.65:39682.service - OpenSSH per-connection server daemon (139.178.89.65:39682). Sep 12 17:17:33.868797 systemd-logind[1896]: Removed session 12. Sep 12 17:17:34.076359 sshd[4802]: Accepted publickey for core from 139.178.89.65 port 39682 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:17:34.078400 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:17:34.084273 systemd-logind[1896]: New session 13 of user core. Sep 12 17:17:34.091225 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 12 17:17:34.294681 sshd[4805]: Connection closed by 139.178.89.65 port 39682
Sep 12 17:17:34.295359 sshd-session[4802]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:34.298427 systemd-logind[1896]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:17:34.298688 systemd[1]: sshd@12-172.31.19.109:22-139.178.89.65:39682.service: Deactivated successfully.
Sep 12 17:17:34.300537 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:17:34.303657 systemd-logind[1896]: Removed session 13.
Sep 12 17:17:39.333383 systemd[1]: Started sshd@13-172.31.19.109:22-139.178.89.65:39692.service - OpenSSH per-connection server daemon (139.178.89.65:39692).
Sep 12 17:17:39.497579 sshd[4817]: Accepted publickey for core from 139.178.89.65 port 39692 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:39.499443 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:39.518123 systemd-logind[1896]: New session 14 of user core.
Sep 12 17:17:39.527234 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:17:39.723508 sshd[4819]: Connection closed by 139.178.89.65 port 39692
Sep 12 17:17:39.724832 sshd-session[4817]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:39.729616 systemd[1]: sshd@13-172.31.19.109:22-139.178.89.65:39692.service: Deactivated successfully.
Sep 12 17:17:39.732583 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:17:39.733911 systemd-logind[1896]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:17:39.735220 systemd-logind[1896]: Removed session 14.
Sep 12 17:17:44.763764 systemd[1]: Started sshd@14-172.31.19.109:22-139.178.89.65:39726.service - OpenSSH per-connection server daemon (139.178.89.65:39726).
Sep 12 17:17:44.926196 sshd[4833]: Accepted publickey for core from 139.178.89.65 port 39726 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:44.927601 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:44.933582 systemd-logind[1896]: New session 15 of user core.
Sep 12 17:17:44.950207 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:17:45.144182 sshd[4835]: Connection closed by 139.178.89.65 port 39726
Sep 12 17:17:45.145262 sshd-session[4833]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:45.150519 systemd[1]: sshd@14-172.31.19.109:22-139.178.89.65:39726.service: Deactivated successfully.
Sep 12 17:17:45.154619 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:17:45.156079 systemd-logind[1896]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:17:45.158264 systemd-logind[1896]: Removed session 15.
Sep 12 17:17:45.191474 systemd[1]: Started sshd@15-172.31.19.109:22-139.178.89.65:39742.service - OpenSSH per-connection server daemon (139.178.89.65:39742).
Sep 12 17:17:45.371733 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 39742 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:45.373401 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:45.379020 systemd-logind[1896]: New session 16 of user core.
Sep 12 17:17:45.384241 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:17:45.911635 sshd[4849]: Connection closed by 139.178.89.65 port 39742
Sep 12 17:17:45.912599 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:45.918472 systemd[1]: sshd@15-172.31.19.109:22-139.178.89.65:39742.service: Deactivated successfully.
Sep 12 17:17:45.924661 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:17:45.926830 systemd-logind[1896]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:17:45.928634 systemd-logind[1896]: Removed session 16.
Sep 12 17:17:45.953394 systemd[1]: Started sshd@16-172.31.19.109:22-139.178.89.65:39754.service - OpenSSH per-connection server daemon (139.178.89.65:39754).
Sep 12 17:17:46.132816 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 39754 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:46.134441 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:46.140051 systemd-logind[1896]: New session 17 of user core.
Sep 12 17:17:46.146169 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:17:46.921266 sshd[4860]: Connection closed by 139.178.89.65 port 39754
Sep 12 17:17:46.924315 sshd-session[4858]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:46.931116 systemd[1]: sshd@16-172.31.19.109:22-139.178.89.65:39754.service: Deactivated successfully.
Sep 12 17:17:46.935871 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:17:46.939598 systemd-logind[1896]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:17:46.942429 systemd-logind[1896]: Removed session 17.
Sep 12 17:17:46.960369 systemd[1]: Started sshd@17-172.31.19.109:22-139.178.89.65:39764.service - OpenSSH per-connection server daemon (139.178.89.65:39764).
Sep 12 17:17:47.131618 sshd[4877]: Accepted publickey for core from 139.178.89.65 port 39764 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:47.133092 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:47.139718 systemd-logind[1896]: New session 18 of user core.
Sep 12 17:17:47.152175 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:17:47.520976 sshd[4879]: Connection closed by 139.178.89.65 port 39764
Sep 12 17:17:47.521645 sshd-session[4877]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:47.531433 systemd[1]: sshd@17-172.31.19.109:22-139.178.89.65:39764.service: Deactivated successfully.
Sep 12 17:17:47.534106 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:17:47.535297 systemd-logind[1896]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:17:47.536887 systemd-logind[1896]: Removed session 18.
Sep 12 17:17:47.561364 systemd[1]: Started sshd@18-172.31.19.109:22-139.178.89.65:39772.service - OpenSSH per-connection server daemon (139.178.89.65:39772).
Sep 12 17:17:47.722985 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 39772 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:47.723668 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:47.729288 systemd-logind[1896]: New session 19 of user core.
Sep 12 17:17:47.739142 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:17:47.944911 sshd[4891]: Connection closed by 139.178.89.65 port 39772
Sep 12 17:17:47.946008 sshd-session[4889]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:47.950147 systemd[1]: sshd@18-172.31.19.109:22-139.178.89.65:39772.service: Deactivated successfully.
Sep 12 17:17:47.952621 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:17:47.954517 systemd-logind[1896]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:17:47.955680 systemd-logind[1896]: Removed session 19.
Sep 12 17:17:52.989433 systemd[1]: Started sshd@19-172.31.19.109:22-139.178.89.65:47374.service - OpenSSH per-connection server daemon (139.178.89.65:47374).
Sep 12 17:17:53.157343 sshd[4908]: Accepted publickey for core from 139.178.89.65 port 47374 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:53.159291 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:53.164405 systemd-logind[1896]: New session 20 of user core.
Sep 12 17:17:53.170240 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:17:53.360055 sshd[4910]: Connection closed by 139.178.89.65 port 47374
Sep 12 17:17:53.361051 sshd-session[4908]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:53.365995 systemd-logind[1896]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:17:53.366761 systemd[1]: sshd@19-172.31.19.109:22-139.178.89.65:47374.service: Deactivated successfully.
Sep 12 17:17:53.369419 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:17:53.370883 systemd-logind[1896]: Removed session 20.
Sep 12 17:17:58.398342 systemd[1]: Started sshd@20-172.31.19.109:22-139.178.89.65:47380.service - OpenSSH per-connection server daemon (139.178.89.65:47380).
Sep 12 17:17:58.560129 sshd[4922]: Accepted publickey for core from 139.178.89.65 port 47380 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:17:58.563144 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:17:58.569394 systemd-logind[1896]: New session 21 of user core.
Sep 12 17:17:58.575178 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:17:58.762170 sshd[4924]: Connection closed by 139.178.89.65 port 47380
Sep 12 17:17:58.763261 sshd-session[4922]: pam_unix(sshd:session): session closed for user core
Sep 12 17:17:58.767564 systemd[1]: sshd@20-172.31.19.109:22-139.178.89.65:47380.service: Deactivated successfully.
Sep 12 17:17:58.771045 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:17:58.772082 systemd-logind[1896]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:17:58.773321 systemd-logind[1896]: Removed session 21.
Sep 12 17:18:03.798411 systemd[1]: Started sshd@21-172.31.19.109:22-139.178.89.65:40126.service - OpenSSH per-connection server daemon (139.178.89.65:40126).
Sep 12 17:18:03.970889 sshd[4936]: Accepted publickey for core from 139.178.89.65 port 40126 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:18:03.972578 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:18:03.979254 systemd-logind[1896]: New session 22 of user core.
Sep 12 17:18:03.985235 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:18:04.174649 sshd[4938]: Connection closed by 139.178.89.65 port 40126
Sep 12 17:18:04.175914 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Sep 12 17:18:04.180796 systemd[1]: sshd@21-172.31.19.109:22-139.178.89.65:40126.service: Deactivated successfully.
Sep 12 17:18:04.184516 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:18:04.186079 systemd-logind[1896]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:18:04.187361 systemd-logind[1896]: Removed session 22.
Sep 12 17:18:09.212330 systemd[1]: Started sshd@22-172.31.19.109:22-139.178.89.65:40130.service - OpenSSH per-connection server daemon (139.178.89.65:40130).
Sep 12 17:18:09.381567 sshd[4951]: Accepted publickey for core from 139.178.89.65 port 40130 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:18:09.383187 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:18:09.390363 systemd-logind[1896]: New session 23 of user core.
Sep 12 17:18:09.397211 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:18:09.581065 sshd[4953]: Connection closed by 139.178.89.65 port 40130
Sep 12 17:18:09.581835 sshd-session[4951]: pam_unix(sshd:session): session closed for user core
Sep 12 17:18:09.585127 systemd[1]: sshd@22-172.31.19.109:22-139.178.89.65:40130.service: Deactivated successfully.
Sep 12 17:18:09.587466 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:18:09.590344 systemd-logind[1896]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:18:09.591619 systemd-logind[1896]: Removed session 23.
Sep 12 17:18:09.623575 systemd[1]: Started sshd@23-172.31.19.109:22-139.178.89.65:40132.service - OpenSSH per-connection server daemon (139.178.89.65:40132).
Sep 12 17:18:09.784526 sshd[4965]: Accepted publickey for core from 139.178.89.65 port 40132 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:18:09.786250 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:18:09.792079 systemd-logind[1896]: New session 24 of user core.
Sep 12 17:18:09.798142 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:18:11.402561 containerd[1909]: time="2025-09-12T17:18:11.402495395Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:18:11.431569 containerd[1909]: time="2025-09-12T17:18:11.431526164Z" level=info msg="StopContainer for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" with timeout 2 (s)"
Sep 12 17:18:11.431716 containerd[1909]: time="2025-09-12T17:18:11.431692465Z" level=info msg="StopContainer for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" with timeout 30 (s)"
Sep 12 17:18:11.432282 containerd[1909]: time="2025-09-12T17:18:11.432113517Z" level=info msg="Stop container \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" with signal terminated"
Sep 12 17:18:11.432282 containerd[1909]: time="2025-09-12T17:18:11.432217865Z" level=info msg="Stop container \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" with signal terminated"
Sep 12 17:18:11.443725 systemd-networkd[1828]: lxc_health: Link DOWN
Sep 12 17:18:11.443733 systemd-networkd[1828]: lxc_health: Lost carrier
Sep 12 17:18:11.453221 systemd[1]: cri-containerd-026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173.scope: Deactivated successfully.
Sep 12 17:18:11.470260 systemd[1]: cri-containerd-7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e.scope: Deactivated successfully.
Sep 12 17:18:11.471115 systemd[1]: cri-containerd-7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e.scope: Consumed 8.485s CPU time, 197.4M memory peak, 73.5M read from disk, 13.3M written to disk.
Sep 12 17:18:11.500166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173-rootfs.mount: Deactivated successfully.
Sep 12 17:18:11.512370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e-rootfs.mount: Deactivated successfully.
Sep 12 17:18:11.531623 containerd[1909]: time="2025-09-12T17:18:11.531540363Z" level=info msg="shim disconnected" id=026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173 namespace=k8s.io
Sep 12 17:18:11.531623 containerd[1909]: time="2025-09-12T17:18:11.531599772Z" level=warning msg="cleaning up after shim disconnected" id=026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173 namespace=k8s.io
Sep 12 17:18:11.531623 containerd[1909]: time="2025-09-12T17:18:11.531609654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:11.532201 containerd[1909]: time="2025-09-12T17:18:11.531887506Z" level=info msg="shim disconnected" id=7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e namespace=k8s.io
Sep 12 17:18:11.532201 containerd[1909]: time="2025-09-12T17:18:11.531915796Z" level=warning msg="cleaning up after shim disconnected" id=7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e namespace=k8s.io
Sep 12 17:18:11.532201 containerd[1909]: time="2025-09-12T17:18:11.531922546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:11.563630 containerd[1909]: time="2025-09-12T17:18:11.563579896Z" level=info msg="StopContainer for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" returns successfully"
Sep 12 17:18:11.565556 containerd[1909]: time="2025-09-12T17:18:11.565502643Z" level=info msg="StopContainer for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" returns successfully"
Sep 12 17:18:11.570309 containerd[1909]: time="2025-09-12T17:18:11.570202655Z" level=info msg="StopPodSandbox for \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\""
Sep 12 17:18:11.571356 containerd[1909]: time="2025-09-12T17:18:11.571272829Z" level=info msg="StopPodSandbox for \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\""
Sep 12 17:18:11.579357 containerd[1909]: time="2025-09-12T17:18:11.577092956Z" level=info msg="Container to stop \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:18:11.579357 containerd[1909]: time="2025-09-12T17:18:11.579349008Z" level=info msg="Container to stop \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:18:11.579357 containerd[1909]: time="2025-09-12T17:18:11.579366549Z" level=info msg="Container to stop \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:18:11.579607 containerd[1909]: time="2025-09-12T17:18:11.579378799Z" level=info msg="Container to stop \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:18:11.579607 containerd[1909]: time="2025-09-12T17:18:11.579390903Z" level=info msg="Container to stop \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:18:11.585478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6-shm.mount: Deactivated successfully.
Sep 12 17:18:11.585845 containerd[1909]: time="2025-09-12T17:18:11.577092954Z" level=info msg="Container to stop \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:18:11.591351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0-shm.mount: Deactivated successfully.
Sep 12 17:18:11.599176 systemd[1]: cri-containerd-7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0.scope: Deactivated successfully.
Sep 12 17:18:11.603836 systemd[1]: cri-containerd-92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6.scope: Deactivated successfully.
Sep 12 17:18:11.656971 containerd[1909]: time="2025-09-12T17:18:11.656017727Z" level=info msg="shim disconnected" id=92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6 namespace=k8s.io
Sep 12 17:18:11.656971 containerd[1909]: time="2025-09-12T17:18:11.656177132Z" level=warning msg="cleaning up after shim disconnected" id=92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6 namespace=k8s.io
Sep 12 17:18:11.656971 containerd[1909]: time="2025-09-12T17:18:11.656187259Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:11.656971 containerd[1909]: time="2025-09-12T17:18:11.656332329Z" level=info msg="shim disconnected" id=7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0 namespace=k8s.io
Sep 12 17:18:11.656971 containerd[1909]: time="2025-09-12T17:18:11.656892509Z" level=warning msg="cleaning up after shim disconnected" id=7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0 namespace=k8s.io
Sep 12 17:18:11.656971 containerd[1909]: time="2025-09-12T17:18:11.656900024Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:11.681264 containerd[1909]: time="2025-09-12T17:18:11.681073617Z" level=info msg="TearDown network for sandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" successfully"
Sep 12 17:18:11.681264 containerd[1909]: time="2025-09-12T17:18:11.681116076Z" level=info msg="StopPodSandbox for \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" returns successfully"
Sep 12 17:18:11.683150 containerd[1909]: time="2025-09-12T17:18:11.683101184Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:18:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 12 17:18:11.684465 containerd[1909]: time="2025-09-12T17:18:11.684318026Z" level=info msg="TearDown network for sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" successfully"
Sep 12 17:18:11.684465 containerd[1909]: time="2025-09-12T17:18:11.684350149Z" level=info msg="StopPodSandbox for \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" returns successfully"
Sep 12 17:18:11.761834 kubelet[3167]: I0912 17:18:11.761787 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm8ck\" (UniqueName: \"kubernetes.io/projected/11064375-191d-45a1-8158-d8a41208bead-kube-api-access-qm8ck\") pod \"11064375-191d-45a1-8158-d8a41208bead\" (UID: \"11064375-191d-45a1-8158-d8a41208bead\") "
Sep 12 17:18:11.762368 kubelet[3167]: I0912 17:18:11.761868 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11064375-191d-45a1-8158-d8a41208bead-cilium-config-path\") pod \"11064375-191d-45a1-8158-d8a41208bead\" (UID: \"11064375-191d-45a1-8158-d8a41208bead\") "
Sep 12 17:18:11.777460 kubelet[3167]: I0912 17:18:11.775786 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11064375-191d-45a1-8158-d8a41208bead-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11064375-191d-45a1-8158-d8a41208bead" (UID: "11064375-191d-45a1-8158-d8a41208bead"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 17:18:11.780599 kubelet[3167]: I0912 17:18:11.780545 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11064375-191d-45a1-8158-d8a41208bead-kube-api-access-qm8ck" (OuterVolumeSpecName: "kube-api-access-qm8ck") pod "11064375-191d-45a1-8158-d8a41208bead" (UID: "11064375-191d-45a1-8158-d8a41208bead"). InnerVolumeSpecName "kube-api-access-qm8ck". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:18:11.798701 kubelet[3167]: I0912 17:18:11.798666 3167 scope.go:117] "RemoveContainer" containerID="7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e"
Sep 12 17:18:11.806918 containerd[1909]: time="2025-09-12T17:18:11.806832756Z" level=info msg="RemoveContainer for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\""
Sep 12 17:18:11.808673 systemd[1]: Removed slice kubepods-besteffort-pod11064375_191d_45a1_8158_d8a41208bead.slice - libcontainer container kubepods-besteffort-pod11064375_191d_45a1_8158_d8a41208bead.slice.
Sep 12 17:18:11.816589 containerd[1909]: time="2025-09-12T17:18:11.816462841Z" level=info msg="RemoveContainer for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" returns successfully"
Sep 12 17:18:11.831566 kubelet[3167]: I0912 17:18:11.831179 3167 scope.go:117] "RemoveContainer" containerID="5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06"
Sep 12 17:18:11.832729 containerd[1909]: time="2025-09-12T17:18:11.832451789Z" level=info msg="RemoveContainer for \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\""
Sep 12 17:18:11.838440 containerd[1909]: time="2025-09-12T17:18:11.838393393Z" level=info msg="RemoveContainer for \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\" returns successfully"
Sep 12 17:18:11.840183 kubelet[3167]: I0912 17:18:11.839457 3167 scope.go:117] "RemoveContainer" containerID="a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22"
Sep 12 17:18:11.843344 containerd[1909]: time="2025-09-12T17:18:11.843170297Z" level=info msg="RemoveContainer for \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\""
Sep 12 17:18:11.849289 containerd[1909]: time="2025-09-12T17:18:11.849164586Z" level=info msg="RemoveContainer for \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\" returns successfully"
Sep 12 17:18:11.849553 kubelet[3167]: I0912 17:18:11.849452 3167 scope.go:117] "RemoveContainer" containerID="b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9"
Sep 12 17:18:11.850789 containerd[1909]: time="2025-09-12T17:18:11.850758083Z" level=info msg="RemoveContainer for \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\""
Sep 12 17:18:11.862496 kubelet[3167]: I0912 17:18:11.862450 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69872b7d-98bb-4451-9d57-bd126960412b-clustermesh-secrets\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.865199 kubelet[3167]: I0912 17:18:11.862600 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-xtables-lock\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.865406 containerd[1909]: time="2025-09-12T17:18:11.865077206Z" level=info msg="RemoveContainer for \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\" returns successfully"
Sep 12 17:18:11.865470 kubelet[3167]: I0912 17:18:11.865040 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-etc-cni-netd\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.865698 kubelet[3167]: I0912 17:18:11.865544 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-net\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.865698 kubelet[3167]: I0912 17:18:11.865594 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-cgroup\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.865698 kubelet[3167]: I0912 17:18:11.865668 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-hubble-tls\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.866422 kubelet[3167]: I0912 17:18:11.865743 3167 scope.go:117] "RemoveContainer" containerID="b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9"
Sep 12 17:18:11.866422 kubelet[3167]: I0912 17:18:11.865875 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cni-path\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.866422 kubelet[3167]: I0912 17:18:11.865907 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjz85\" (UniqueName: \"kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-kube-api-access-jjz85\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.867289 kubelet[3167]: I0912 17:18:11.867002 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-bpf-maps\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.867289 kubelet[3167]: I0912 17:18:11.867146 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69872b7d-98bb-4451-9d57-bd126960412b-cilium-config-path\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.867289 kubelet[3167]: I0912 17:18:11.867174 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-run\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.868130 kubelet[3167]: I0912 17:18:11.867729 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-hostproc\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.868130 kubelet[3167]: I0912 17:18:11.867887 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-kernel\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.868130 kubelet[3167]: I0912 17:18:11.867912 3167 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-lib-modules\") pod \"69872b7d-98bb-4451-9d57-bd126960412b\" (UID: \"69872b7d-98bb-4451-9d57-bd126960412b\") "
Sep 12 17:18:11.868483 kubelet[3167]: I0912 17:18:11.865874 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.868483 kubelet[3167]: I0912 17:18:11.865896 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.868483 kubelet[3167]: I0912 17:18:11.867055 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.868483 kubelet[3167]: I0912 17:18:11.867078 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.868483 kubelet[3167]: I0912 17:18:11.868200 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.868813 kubelet[3167]: I0912 17:18:11.868316 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cni-path" (OuterVolumeSpecName: "cni-path") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.869531 kubelet[3167]: I0912 17:18:11.869437 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.869531 kubelet[3167]: I0912 17:18:11.869490 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-hostproc" (OuterVolumeSpecName: "hostproc") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.869714 kubelet[3167]: I0912 17:18:11.869512 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.869714 kubelet[3167]: I0912 17:18:11.869556 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:18:11.869714 kubelet[3167]: I0912 17:18:11.869680 3167 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qm8ck\" (UniqueName: \"kubernetes.io/projected/11064375-191d-45a1-8158-d8a41208bead-kube-api-access-qm8ck\") on node \"ip-172-31-19-109\" DevicePath \"\""
Sep 12 17:18:11.869714 kubelet[3167]: I0912 17:18:11.869700 3167 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-xtables-lock\") on node \"ip-172-31-19-109\" DevicePath \"\""
Sep 12 17:18:11.869714 kubelet[3167]: I0912 17:18:11.869715 3167 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-etc-cni-netd\") on node \"ip-172-31-19-109\" DevicePath \"\""
Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869728 3167 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-net\") on node \"ip-172-31-19-109\" DevicePath \"\""
Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869746 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-cgroup\") on node \"ip-172-31-19-109\" DevicePath \"\""
Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869762 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11064375-191d-45a1-8158-d8a41208bead-cilium-config-path\") on node \"ip-172-31-19-109\" DevicePath \"\""
Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869774 3167 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cni-path\") on node \"ip-172-31-19-109\" 
DevicePath \"\"" Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869786 3167 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-bpf-maps\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869799 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-cilium-run\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869811 3167 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-hostproc\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.870022 kubelet[3167]: I0912 17:18:11.869823 3167 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-host-proc-sys-kernel\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.870471 kubelet[3167]: I0912 17:18:11.869838 3167 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69872b7d-98bb-4451-9d57-bd126960412b-lib-modules\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.873381 kubelet[3167]: I0912 17:18:11.873231 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69872b7d-98bb-4451-9d57-bd126960412b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:18:11.874047 containerd[1909]: time="2025-09-12T17:18:11.873984932Z" level=info msg="RemoveContainer for \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\"" Sep 12 17:18:11.876761 kubelet[3167]: I0912 17:18:11.876724 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-kube-api-access-jjz85" (OuterVolumeSpecName: "kube-api-access-jjz85") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "kube-api-access-jjz85". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:18:11.878754 kubelet[3167]: I0912 17:18:11.878716 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:18:11.879474 kubelet[3167]: I0912 17:18:11.879432 3167 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69872b7d-98bb-4451-9d57-bd126960412b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69872b7d-98bb-4451-9d57-bd126960412b" (UID: "69872b7d-98bb-4451-9d57-bd126960412b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:18:11.880539 containerd[1909]: time="2025-09-12T17:18:11.880491314Z" level=info msg="RemoveContainer for \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\" returns successfully" Sep 12 17:18:11.880782 kubelet[3167]: I0912 17:18:11.880762 3167 scope.go:117] "RemoveContainer" containerID="7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e" Sep 12 17:18:11.881262 containerd[1909]: time="2025-09-12T17:18:11.881093000Z" level=error msg="ContainerStatus for \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\": not found" Sep 12 17:18:11.884312 kubelet[3167]: E0912 17:18:11.884021 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\": not found" containerID="7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e" Sep 12 17:18:11.901055 kubelet[3167]: I0912 17:18:11.884196 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e"} err="failed to get container status \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d46fa011785fbdf808f52679e0a970854fd1080f1e96d7071f98639c231068e\": not found" Sep 12 17:18:11.901055 kubelet[3167]: I0912 17:18:11.901056 3167 scope.go:117] "RemoveContainer" containerID="5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06" Sep 12 17:18:11.901375 containerd[1909]: time="2025-09-12T17:18:11.901337808Z" level=error msg="ContainerStatus for 
\"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\": not found" Sep 12 17:18:11.901831 kubelet[3167]: E0912 17:18:11.901538 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\": not found" containerID="5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06" Sep 12 17:18:11.901831 kubelet[3167]: I0912 17:18:11.901571 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06"} err="failed to get container status \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fc01fae2031980714616eb356cdab7c2db2032ea26c870b11540da37422ca06\": not found" Sep 12 17:18:11.901831 kubelet[3167]: I0912 17:18:11.901672 3167 scope.go:117] "RemoveContainer" containerID="a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22" Sep 12 17:18:11.902678 kubelet[3167]: E0912 17:18:11.902194 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\": not found" containerID="a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22" Sep 12 17:18:11.902678 kubelet[3167]: I0912 17:18:11.902217 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22"} err="failed to get container status \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\": not found" Sep 12 17:18:11.902678 kubelet[3167]: I0912 17:18:11.902287 3167 scope.go:117] "RemoveContainer" containerID="b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9" Sep 12 17:18:11.902678 kubelet[3167]: E0912 17:18:11.902539 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\": not found" containerID="b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9" Sep 12 17:18:11.902678 kubelet[3167]: I0912 17:18:11.902557 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9"} err="failed to get container status \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\": not found" Sep 12 17:18:11.902678 kubelet[3167]: I0912 17:18:11.902571 3167 scope.go:117] "RemoveContainer" containerID="b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9" Sep 12 17:18:11.902854 containerd[1909]: time="2025-09-12T17:18:11.902036269Z" level=error msg="ContainerStatus for \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a310d2ee9536bef55ba586af69008ad17c57fd718dcac75401b147de2e718a22\": not found" Sep 12 17:18:11.902854 containerd[1909]: time="2025-09-12T17:18:11.902446973Z" level=error msg="ContainerStatus for \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"b9766eef11851d4ef27a2e6fea72f2df817c906dc57553a8227ce408ec5dadf9\": not found" Sep 12 17:18:11.902919 containerd[1909]: time="2025-09-12T17:18:11.902790638Z" level=error msg="ContainerStatus for \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\": not found" Sep 12 17:18:11.903073 kubelet[3167]: E0912 17:18:11.903049 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\": not found" containerID="b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9" Sep 12 17:18:11.903107 kubelet[3167]: I0912 17:18:11.903072 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9"} err="failed to get container status \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3163b08418624750093385561660eeccfbcfb0f3b5426b01a75f46b41b423d9\": not found" Sep 12 17:18:11.903107 kubelet[3167]: I0912 17:18:11.903085 3167 scope.go:117] "RemoveContainer" containerID="026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173" Sep 12 17:18:11.904087 containerd[1909]: time="2025-09-12T17:18:11.904059621Z" level=info msg="RemoveContainer for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\"" Sep 12 17:18:11.909451 containerd[1909]: time="2025-09-12T17:18:11.909287099Z" level=info msg="RemoveContainer for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" returns successfully" Sep 12 17:18:11.911080 kubelet[3167]: I0912 17:18:11.911045 3167 scope.go:117] "RemoveContainer" 
containerID="026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173" Sep 12 17:18:11.912186 containerd[1909]: time="2025-09-12T17:18:11.912124799Z" level=error msg="ContainerStatus for \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\": not found" Sep 12 17:18:11.912382 kubelet[3167]: E0912 17:18:11.912354 3167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\": not found" containerID="026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173" Sep 12 17:18:11.912449 kubelet[3167]: I0912 17:18:11.912390 3167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173"} err="failed to get container status \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\": rpc error: code = NotFound desc = an error occurred when try to find container \"026ca7663704bcf74991842327fca8373f61ef0be2f60ca2da926a668e92a173\": not found" Sep 12 17:18:11.970692 kubelet[3167]: I0912 17:18:11.970644 3167 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69872b7d-98bb-4451-9d57-bd126960412b-clustermesh-secrets\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.970692 kubelet[3167]: I0912 17:18:11.970681 3167 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-hubble-tls\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.970692 kubelet[3167]: I0912 17:18:11.970697 3167 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-jjz85\" (UniqueName: \"kubernetes.io/projected/69872b7d-98bb-4451-9d57-bd126960412b-kube-api-access-jjz85\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:11.970892 kubelet[3167]: I0912 17:18:11.970709 3167 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69872b7d-98bb-4451-9d57-bd126960412b-cilium-config-path\") on node \"ip-172-31-19-109\" DevicePath \"\"" Sep 12 17:18:12.105211 systemd[1]: Removed slice kubepods-burstable-pod69872b7d_98bb_4451_9d57_bd126960412b.slice - libcontainer container kubepods-burstable-pod69872b7d_98bb_4451_9d57_bd126960412b.slice. Sep 12 17:18:12.106273 systemd[1]: kubepods-burstable-pod69872b7d_98bb_4451_9d57_bd126960412b.slice: Consumed 8.590s CPU time, 197.7M memory peak, 73.5M read from disk, 13.3M written to disk. Sep 12 17:18:12.381178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0-rootfs.mount: Deactivated successfully. Sep 12 17:18:12.381301 systemd[1]: var-lib-kubelet-pods-11064375\x2d191d\x2d45a1\x2d8158\x2dd8a41208bead-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqm8ck.mount: Deactivated successfully. Sep 12 17:18:12.381381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6-rootfs.mount: Deactivated successfully. Sep 12 17:18:12.381441 systemd[1]: var-lib-kubelet-pods-69872b7d\x2d98bb\x2d4451\x2d9d57\x2dbd126960412b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djjz85.mount: Deactivated successfully. Sep 12 17:18:12.381505 systemd[1]: var-lib-kubelet-pods-69872b7d\x2d98bb\x2d4451\x2d9d57\x2dbd126960412b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 12 17:18:12.381569 systemd[1]: var-lib-kubelet-pods-69872b7d\x2d98bb\x2d4451\x2d9d57\x2dbd126960412b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:18:13.279882 sshd[4967]: Connection closed by 139.178.89.65 port 40132 Sep 12 17:18:13.281114 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Sep 12 17:18:13.285202 systemd-logind[1896]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:18:13.286088 systemd[1]: sshd@23-172.31.19.109:22-139.178.89.65:40132.service: Deactivated successfully. Sep 12 17:18:13.288459 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:18:13.289824 systemd-logind[1896]: Removed session 24. Sep 12 17:18:13.316332 systemd[1]: Started sshd@24-172.31.19.109:22-139.178.89.65:37924.service - OpenSSH per-connection server daemon (139.178.89.65:37924). Sep 12 17:18:13.391041 kubelet[3167]: I0912 17:18:13.390437 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11064375-191d-45a1-8158-d8a41208bead" path="/var/lib/kubelet/pods/11064375-191d-45a1-8158-d8a41208bead/volumes" Sep 12 17:18:13.391740 kubelet[3167]: I0912 17:18:13.391221 3167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69872b7d-98bb-4451-9d57-bd126960412b" path="/var/lib/kubelet/pods/69872b7d-98bb-4451-9d57-bd126960412b/volumes" Sep 12 17:18:13.497702 sshd[5123]: Accepted publickey for core from 139.178.89.65 port 37924 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:18:13.499304 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:18:13.505902 systemd-logind[1896]: New session 25 of user core. Sep 12 17:18:13.511141 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 12 17:18:13.514465 kubelet[3167]: E0912 17:18:13.514332 3167 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:18:13.809199 ntpd[1889]: Deleting interface #11 lxc_health, fe80::d41f:efff:fe6d:de06%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Sep 12 17:18:13.809733 ntpd[1889]: 12 Sep 17:18:13 ntpd[1889]: Deleting interface #11 lxc_health, fe80::d41f:efff:fe6d:de06%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Sep 12 17:18:14.371097 sshd[5125]: Connection closed by 139.178.89.65 port 37924 Sep 12 17:18:14.372278 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Sep 12 17:18:14.377631 kubelet[3167]: I0912 17:18:14.377591 3167 memory_manager.go:355] "RemoveStaleState removing state" podUID="69872b7d-98bb-4451-9d57-bd126960412b" containerName="cilium-agent" Sep 12 17:18:14.377631 kubelet[3167]: I0912 17:18:14.377627 3167 memory_manager.go:355] "RemoveStaleState removing state" podUID="11064375-191d-45a1-8158-d8a41208bead" containerName="cilium-operator" Sep 12 17:18:14.382313 systemd[1]: sshd@24-172.31.19.109:22-139.178.89.65:37924.service: Deactivated successfully. Sep 12 17:18:14.391252 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:18:14.397660 systemd-logind[1896]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:18:14.427850 systemd[1]: Started sshd@25-172.31.19.109:22-139.178.89.65:37934.service - OpenSSH per-connection server daemon (139.178.89.65:37934). Sep 12 17:18:14.431793 systemd-logind[1896]: Removed session 25. Sep 12 17:18:14.483793 systemd[1]: Created slice kubepods-burstable-pod9305b199_3097_4611_aeca_41932b686f87.slice - libcontainer container kubepods-burstable-pod9305b199_3097_4611_aeca_41932b686f87.slice. 
Sep 12 17:18:14.492825 kubelet[3167]: I0912 17:18:14.492780 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76bsl\" (UniqueName: \"kubernetes.io/projected/9305b199-3097-4611-aeca-41932b686f87-kube-api-access-76bsl\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493299 kubelet[3167]: I0912 17:18:14.492849 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9305b199-3097-4611-aeca-41932b686f87-cilium-ipsec-secrets\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493299 kubelet[3167]: I0912 17:18:14.492878 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-host-proc-sys-net\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493299 kubelet[3167]: I0912 17:18:14.492900 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-host-proc-sys-kernel\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493299 kubelet[3167]: I0912 17:18:14.492926 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-cilium-run\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493299 kubelet[3167]: I0912 17:18:14.492971 3167 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-bpf-maps\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493299 kubelet[3167]: I0912 17:18:14.492997 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9305b199-3097-4611-aeca-41932b686f87-cilium-config-path\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493459 kubelet[3167]: I0912 17:18:14.493020 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-xtables-lock\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493459 kubelet[3167]: I0912 17:18:14.493043 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9305b199-3097-4611-aeca-41932b686f87-clustermesh-secrets\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493459 kubelet[3167]: I0912 17:18:14.493071 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-hostproc\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493459 kubelet[3167]: I0912 17:18:14.493095 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-lib-modules\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493459 kubelet[3167]: I0912 17:18:14.493124 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-cni-path\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493459 kubelet[3167]: I0912 17:18:14.493148 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-etc-cni-netd\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493662 kubelet[3167]: I0912 17:18:14.493168 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9305b199-3097-4611-aeca-41932b686f87-hubble-tls\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.493662 kubelet[3167]: I0912 17:18:14.493195 3167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9305b199-3097-4611-aeca-41932b686f87-cilium-cgroup\") pod \"cilium-4fg8j\" (UID: \"9305b199-3097-4611-aeca-41932b686f87\") " pod="kube-system/cilium-4fg8j" Sep 12 17:18:14.630226 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 37934 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA Sep 12 17:18:14.638349 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:18:14.648421 systemd-logind[1896]: New session 26 of user 
core. Sep 12 17:18:14.656200 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:18:14.773270 sshd[5142]: Connection closed by 139.178.89.65 port 37934 Sep 12 17:18:14.773737 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Sep 12 17:18:14.778273 systemd[1]: sshd@25-172.31.19.109:22-139.178.89.65:37934.service: Deactivated successfully. Sep 12 17:18:14.780883 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:18:14.782261 systemd-logind[1896]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:18:14.783620 systemd-logind[1896]: Removed session 26. Sep 12 17:18:14.804900 containerd[1909]: time="2025-09-12T17:18:14.804855735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4fg8j,Uid:9305b199-3097-4611-aeca-41932b686f87,Namespace:kube-system,Attempt:0,}" Sep 12 17:18:14.813200 systemd[1]: Started sshd@26-172.31.19.109:22-139.178.89.65:37938.service - OpenSSH per-connection server daemon (139.178.89.65:37938). Sep 12 17:18:14.875088 containerd[1909]: time="2025-09-12T17:18:14.874912196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:18:14.875088 containerd[1909]: time="2025-09-12T17:18:14.875025661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:18:14.875262 containerd[1909]: time="2025-09-12T17:18:14.875050278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:18:14.876876 containerd[1909]: time="2025-09-12T17:18:14.876668489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:18:14.899197 systemd[1]: Started cri-containerd-38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17.scope - libcontainer container 38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17. Sep 12 17:18:14.933249 containerd[1909]: time="2025-09-12T17:18:14.933178146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4fg8j,Uid:9305b199-3097-4611-aeca-41932b686f87,Namespace:kube-system,Attempt:0,} returns sandbox id \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\"" Sep 12 17:18:14.938289 containerd[1909]: time="2025-09-12T17:18:14.938158861Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:18:14.962968 containerd[1909]: time="2025-09-12T17:18:14.960292922Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b\"" Sep 12 17:18:14.970180 containerd[1909]: time="2025-09-12T17:18:14.970010051Z" level=info msg="StartContainer for \"54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b\"" Sep 12 17:18:15.008346 systemd[1]: Started cri-containerd-54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b.scope - libcontainer container 54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b. 
Sep 12 17:18:15.034598 sshd[5150]: Accepted publickey for core from 139.178.89.65 port 37938 ssh2: RSA SHA256:y2CKJkWUYShnRPQHaX6GVCzN7kSZ4Mn9aBLXYnNVJUA
Sep 12 17:18:15.037203 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:18:15.042512 containerd[1909]: time="2025-09-12T17:18:15.042411439Z" level=info msg="StartContainer for \"54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b\" returns successfully"
Sep 12 17:18:15.047798 systemd-logind[1896]: New session 27 of user core.
Sep 12 17:18:15.054215 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:18:15.065457 systemd[1]: cri-containerd-54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b.scope: Deactivated successfully.
Sep 12 17:18:15.065993 systemd[1]: cri-containerd-54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b.scope: Consumed 25ms CPU time, 9M memory peak, 2.7M read from disk.
Sep 12 17:18:15.117179 containerd[1909]: time="2025-09-12T17:18:15.117118330Z" level=info msg="shim disconnected" id=54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b namespace=k8s.io
Sep 12 17:18:15.117179 containerd[1909]: time="2025-09-12T17:18:15.117169969Z" level=warning msg="cleaning up after shim disconnected" id=54d68abd5f25693c5432649c4dcf3d21646b16e626415250e8396971ae504e5b namespace=k8s.io
Sep 12 17:18:15.117179 containerd[1909]: time="2025-09-12T17:18:15.117178296Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:15.506506 kubelet[3167]: I0912 17:18:15.506435 3167 setters.go:602] "Node became not ready" node="ip-172-31-19-109" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:18:15Z","lastTransitionTime":"2025-09-12T17:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 17:18:15.821867 containerd[1909]: time="2025-09-12T17:18:15.821821624Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:18:15.849170 containerd[1909]: time="2025-09-12T17:18:15.847939433Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a\""
Sep 12 17:18:15.850216 containerd[1909]: time="2025-09-12T17:18:15.850057445Z" level=info msg="StartContainer for \"8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a\""
Sep 12 17:18:15.902215 systemd[1]: Started cri-containerd-8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a.scope - libcontainer container 8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a.
Sep 12 17:18:15.945789 containerd[1909]: time="2025-09-12T17:18:15.945496167Z" level=info msg="StartContainer for \"8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a\" returns successfully"
Sep 12 17:18:15.959301 systemd[1]: cri-containerd-8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a.scope: Deactivated successfully.
Sep 12 17:18:15.959982 systemd[1]: cri-containerd-8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a.scope: Consumed 21ms CPU time, 7.5M memory peak, 2.1M read from disk.
Sep 12 17:18:15.999768 containerd[1909]: time="2025-09-12T17:18:15.999681395Z" level=info msg="shim disconnected" id=8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a namespace=k8s.io
Sep 12 17:18:15.999768 containerd[1909]: time="2025-09-12T17:18:15.999761822Z" level=warning msg="cleaning up after shim disconnected" id=8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a namespace=k8s.io
Sep 12 17:18:15.999768 containerd[1909]: time="2025-09-12T17:18:15.999773962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:16.603055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c61da3d3b50bccbf11495ed10d5afd6fddf151032b8541977630d83038fab8a-rootfs.mount: Deactivated successfully.
Sep 12 17:18:16.831906 containerd[1909]: time="2025-09-12T17:18:16.831741123Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:18:16.868771 containerd[1909]: time="2025-09-12T17:18:16.868597741Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f\""
Sep 12 17:18:16.870873 containerd[1909]: time="2025-09-12T17:18:16.869220919Z" level=info msg="StartContainer for \"3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f\""
Sep 12 17:18:16.918266 systemd[1]: Started cri-containerd-3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f.scope - libcontainer container 3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f.
Sep 12 17:18:16.967655 containerd[1909]: time="2025-09-12T17:18:16.967602466Z" level=info msg="StartContainer for \"3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f\" returns successfully"
Sep 12 17:18:16.978776 systemd[1]: cri-containerd-3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f.scope: Deactivated successfully.
Sep 12 17:18:17.018743 containerd[1909]: time="2025-09-12T17:18:17.018675215Z" level=info msg="shim disconnected" id=3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f namespace=k8s.io
Sep 12 17:18:17.018743 containerd[1909]: time="2025-09-12T17:18:17.018727739Z" level=warning msg="cleaning up after shim disconnected" id=3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f namespace=k8s.io
Sep 12 17:18:17.018743 containerd[1909]: time="2025-09-12T17:18:17.018736428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:17.601522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3db6319435574c774af79dd48d9d678a3c26d4c29c72c5bed652d8c83012e62f-rootfs.mount: Deactivated successfully.
Sep 12 17:18:17.829879 containerd[1909]: time="2025-09-12T17:18:17.829470092Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:18:17.857393 containerd[1909]: time="2025-09-12T17:18:17.857111756Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b\""
Sep 12 17:18:17.859259 containerd[1909]: time="2025-09-12T17:18:17.859183541Z" level=info msg="StartContainer for \"d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b\""
Sep 12 17:18:17.913200 systemd[1]: Started cri-containerd-d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b.scope - libcontainer container d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b.
Sep 12 17:18:17.952573 systemd[1]: cri-containerd-d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b.scope: Deactivated successfully.
Sep 12 17:18:17.955636 containerd[1909]: time="2025-09-12T17:18:17.955006720Z" level=info msg="StartContainer for \"d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b\" returns successfully"
Sep 12 17:18:17.999703 containerd[1909]: time="2025-09-12T17:18:17.999638863Z" level=info msg="shim disconnected" id=d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b namespace=k8s.io
Sep 12 17:18:17.999703 containerd[1909]: time="2025-09-12T17:18:17.999692043Z" level=warning msg="cleaning up after shim disconnected" id=d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b namespace=k8s.io
Sep 12 17:18:17.999703 containerd[1909]: time="2025-09-12T17:18:17.999704498Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:18.386462 kubelet[3167]: E0912 17:18:18.386383 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-kpz9d" podUID="58451589-060e-43dc-979c-fd0304735802"
Sep 12 17:18:18.516118 kubelet[3167]: E0912 17:18:18.516069 3167 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:18:18.602342 systemd[1]: run-containerd-runc-k8s.io-d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b-runc.EIdwEl.mount: Deactivated successfully.
Sep 12 17:18:18.602503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2130dfa8c5e8555b58c4dc34e40ec54fc162a4e57350e82502f3479879c752b-rootfs.mount: Deactivated successfully.
Sep 12 17:18:18.833394 containerd[1909]: time="2025-09-12T17:18:18.833358595Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:18:18.867171 containerd[1909]: time="2025-09-12T17:18:18.866936061Z" level=info msg="CreateContainer within sandbox \"38839cfdfe99e01680b79e7efb286855286946c21c20fdefcce40462843b1f17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"278409d0bc2386ecadec43aee2f8686755a241c962a541ad78266fed913066fe\""
Sep 12 17:18:18.871326 containerd[1909]: time="2025-09-12T17:18:18.868039884Z" level=info msg="StartContainer for \"278409d0bc2386ecadec43aee2f8686755a241c962a541ad78266fed913066fe\""
Sep 12 17:18:18.920193 systemd[1]: Started cri-containerd-278409d0bc2386ecadec43aee2f8686755a241c962a541ad78266fed913066fe.scope - libcontainer container 278409d0bc2386ecadec43aee2f8686755a241c962a541ad78266fed913066fe.
Sep 12 17:18:18.964713 containerd[1909]: time="2025-09-12T17:18:18.964667747Z" level=info msg="StartContainer for \"278409d0bc2386ecadec43aee2f8686755a241c962a541ad78266fed913066fe\" returns successfully"
Sep 12 17:18:19.646045 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 12 17:18:19.855382 kubelet[3167]: I0912 17:18:19.855317 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4fg8j" podStartSLOduration=5.855300377 podStartE2EDuration="5.855300377s" podCreationTimestamp="2025-09-12 17:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:18:19.85495805 +0000 UTC m=+96.685748543" watchObservedRunningTime="2025-09-12 17:18:19.855300377 +0000 UTC m=+96.686090879"
Sep 12 17:18:20.386335 kubelet[3167]: E0912 17:18:20.386269 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-kpz9d" podUID="58451589-060e-43dc-979c-fd0304735802"
Sep 12 17:18:21.692087 systemd[1]: run-containerd-runc-k8s.io-278409d0bc2386ecadec43aee2f8686755a241c962a541ad78266fed913066fe-runc.Bh2tS0.mount: Deactivated successfully.
Sep 12 17:18:22.386554 kubelet[3167]: E0912 17:18:22.386381 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-kpz9d" podUID="58451589-060e-43dc-979c-fd0304735802"
Sep 12 17:18:22.868314 (udev-worker)[5999]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:18:22.868931 (udev-worker)[6001]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:18:22.869687 systemd-networkd[1828]: lxc_health: Link UP
Sep 12 17:18:22.878646 systemd-networkd[1828]: lxc_health: Gained carrier
Sep 12 17:18:24.711116 systemd-networkd[1828]: lxc_health: Gained IPv6LL
Sep 12 17:18:26.809270 ntpd[1889]: Listen normally on 14 lxc_health [fe80::443:47ff:fe98:fcc4%14]:123
Sep 12 17:18:26.810612 ntpd[1889]: 12 Sep 17:18:26 ntpd[1889]: Listen normally on 14 lxc_health [fe80::443:47ff:fe98:fcc4%14]:123
Sep 12 17:18:28.469596 sshd[5231]: Connection closed by 139.178.89.65 port 37938
Sep 12 17:18:28.473128 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Sep 12 17:18:28.478067 systemd[1]: sshd@26-172.31.19.109:22-139.178.89.65:37938.service: Deactivated successfully.
Sep 12 17:18:28.482246 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:18:28.483326 systemd-logind[1896]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:18:28.485231 systemd-logind[1896]: Removed session 27.
Sep 12 17:18:43.342754 containerd[1909]: time="2025-09-12T17:18:43.342711123Z" level=info msg="StopPodSandbox for \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\""
Sep 12 17:18:43.343269 containerd[1909]: time="2025-09-12T17:18:43.342827973Z" level=info msg="TearDown network for sandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" successfully"
Sep 12 17:18:43.343269 containerd[1909]: time="2025-09-12T17:18:43.342843548Z" level=info msg="StopPodSandbox for \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" returns successfully"
Sep 12 17:18:43.343805 containerd[1909]: time="2025-09-12T17:18:43.343773968Z" level=info msg="RemovePodSandbox for \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\""
Sep 12 17:18:43.343912 containerd[1909]: time="2025-09-12T17:18:43.343821627Z" level=info msg="Forcibly stopping sandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\""
Sep 12 17:18:43.344020 containerd[1909]: time="2025-09-12T17:18:43.343891207Z" level=info msg="TearDown network for sandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" successfully"
Sep 12 17:18:43.356286 containerd[1909]: time="2025-09-12T17:18:43.356219890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:18:43.356704 containerd[1909]: time="2025-09-12T17:18:43.356305747Z" level=info msg="RemovePodSandbox \"7375ba37b9bdb47491befe59a8c4348db5664b9a009a42f383516d1c2318dfa0\" returns successfully"
Sep 12 17:18:43.361364 containerd[1909]: time="2025-09-12T17:18:43.361305442Z" level=info msg="StopPodSandbox for \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\""
Sep 12 17:18:43.361660 containerd[1909]: time="2025-09-12T17:18:43.361431313Z" level=info msg="TearDown network for sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" successfully"
Sep 12 17:18:43.361796 containerd[1909]: time="2025-09-12T17:18:43.361662673Z" level=info msg="StopPodSandbox for \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" returns successfully"
Sep 12 17:18:43.363808 containerd[1909]: time="2025-09-12T17:18:43.362133835Z" level=info msg="RemovePodSandbox for \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\""
Sep 12 17:18:43.363808 containerd[1909]: time="2025-09-12T17:18:43.362171225Z" level=info msg="Forcibly stopping sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\""
Sep 12 17:18:43.363808 containerd[1909]: time="2025-09-12T17:18:43.362243349Z" level=info msg="TearDown network for sandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" successfully"
Sep 12 17:18:43.368218 containerd[1909]: time="2025-09-12T17:18:43.368141968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:18:43.368218 containerd[1909]: time="2025-09-12T17:18:43.368203636Z" level=info msg="RemovePodSandbox \"92c3b47a21583ba8d17b3be4ede88d2890bb30b5a5a613c35765edc4fa38e4f6\" returns successfully"
Sep 12 17:18:43.791721 systemd[1]: cri-containerd-2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16.scope: Deactivated successfully.
Sep 12 17:18:43.792592 systemd[1]: cri-containerd-2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16.scope: Consumed 2.802s CPU time, 82.2M memory peak, 37.5M read from disk.
Sep 12 17:18:43.835170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16-rootfs.mount: Deactivated successfully.
Sep 12 17:18:43.854993 containerd[1909]: time="2025-09-12T17:18:43.854901216Z" level=info msg="shim disconnected" id=2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16 namespace=k8s.io
Sep 12 17:18:43.854993 containerd[1909]: time="2025-09-12T17:18:43.854979825Z" level=warning msg="cleaning up after shim disconnected" id=2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16 namespace=k8s.io
Sep 12 17:18:43.854993 containerd[1909]: time="2025-09-12T17:18:43.854991515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:43.888317 kubelet[3167]: I0912 17:18:43.888262 3167 scope.go:117] "RemoveContainer" containerID="2ddd1ac9580a9d6e2b75a19567f2ea594eea6b1014fefe334679ef376f8dcb16"
Sep 12 17:18:43.897656 containerd[1909]: time="2025-09-12T17:18:43.897420053Z" level=info msg="CreateContainer within sandbox \"d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 17:18:43.920006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708382269.mount: Deactivated successfully.
Sep 12 17:18:43.928233 containerd[1909]: time="2025-09-12T17:18:43.928166591Z" level=info msg="CreateContainer within sandbox \"d96aa8c1d4077914def2ba1539240121a9dda2633ee6ba904b0d0b4d618034e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9d4ba7cecd08daacec73e37b37c9e43bfaf6fa305118c2bccd10a8fbab2b9f4f\""
Sep 12 17:18:43.928733 containerd[1909]: time="2025-09-12T17:18:43.928662125Z" level=info msg="StartContainer for \"9d4ba7cecd08daacec73e37b37c9e43bfaf6fa305118c2bccd10a8fbab2b9f4f\""
Sep 12 17:18:43.969676 systemd[1]: Started cri-containerd-9d4ba7cecd08daacec73e37b37c9e43bfaf6fa305118c2bccd10a8fbab2b9f4f.scope - libcontainer container 9d4ba7cecd08daacec73e37b37c9e43bfaf6fa305118c2bccd10a8fbab2b9f4f.
Sep 12 17:18:44.025108 containerd[1909]: time="2025-09-12T17:18:44.025062702Z" level=info msg="StartContainer for \"9d4ba7cecd08daacec73e37b37c9e43bfaf6fa305118c2bccd10a8fbab2b9f4f\" returns successfully"
Sep 12 17:18:45.317137 kubelet[3167]: E0912 17:18:45.317057 3167 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": context deadline exceeded"
Sep 12 17:18:48.339508 systemd[1]: cri-containerd-b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b.scope: Deactivated successfully.
Sep 12 17:18:48.340327 systemd[1]: cri-containerd-b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b.scope: Consumed 2.336s CPU time, 31.6M memory peak, 13.1M read from disk.
Sep 12 17:18:48.368753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b-rootfs.mount: Deactivated successfully.
Sep 12 17:18:48.389879 containerd[1909]: time="2025-09-12T17:18:48.389805863Z" level=info msg="shim disconnected" id=b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b namespace=k8s.io
Sep 12 17:18:48.389879 containerd[1909]: time="2025-09-12T17:18:48.389858648Z" level=warning msg="cleaning up after shim disconnected" id=b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b namespace=k8s.io
Sep 12 17:18:48.389879 containerd[1909]: time="2025-09-12T17:18:48.389870838Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:18:48.902664 kubelet[3167]: I0912 17:18:48.902633 3167 scope.go:117] "RemoveContainer" containerID="b0494fa378b423732561d636b10a575f75b7c986508e3ef2e09cc38b0e68400b"
Sep 12 17:18:48.904796 containerd[1909]: time="2025-09-12T17:18:48.904765518Z" level=info msg="CreateContainer within sandbox \"abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 17:18:48.928080 containerd[1909]: time="2025-09-12T17:18:48.928035771Z" level=info msg="CreateContainer within sandbox \"abd1580cb16be2c3b433bd40a2f357122328ce817a6c213d73423116ac38defb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"06c329a3e2bcc3aad1475d191616910787515fef186daafaf50bf5e95167c69c\""
Sep 12 17:18:48.928631 containerd[1909]: time="2025-09-12T17:18:48.928596867Z" level=info msg="StartContainer for \"06c329a3e2bcc3aad1475d191616910787515fef186daafaf50bf5e95167c69c\""
Sep 12 17:18:48.969195 systemd[1]: Started cri-containerd-06c329a3e2bcc3aad1475d191616910787515fef186daafaf50bf5e95167c69c.scope - libcontainer container 06c329a3e2bcc3aad1475d191616910787515fef186daafaf50bf5e95167c69c.
Sep 12 17:18:49.021142 containerd[1909]: time="2025-09-12T17:18:49.021068781Z" level=info msg="StartContainer for \"06c329a3e2bcc3aad1475d191616910787515fef186daafaf50bf5e95167c69c\" returns successfully"
Sep 12 17:18:55.318661 kubelet[3167]: E0912 17:18:55.318355 3167 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-109?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"