Sep 16 04:59:53.849530 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025 Sep 16 04:59:53.849567 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:59:53.849581 kernel: BIOS-provided physical RAM map: Sep 16 04:59:53.849592 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 16 04:59:53.849601 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 16 04:59:53.849612 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 16 04:59:53.849624 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 16 04:59:53.849635 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 16 04:59:53.849648 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 16 04:59:53.849659 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 16 04:59:53.849701 kernel: NX (Execute Disable) protection: active Sep 16 04:59:53.849712 kernel: APIC: Static calls initialized Sep 16 04:59:53.849723 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 16 04:59:53.849734 kernel: extended physical RAM map: Sep 16 04:59:53.849751 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 16 04:59:53.849763 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Sep 16 04:59:53.849775 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Sep 16 04:59:53.849787 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Sep 16 04:59:53.849799 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 16 04:59:53.849811 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 16 04:59:53.849823 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 16 04:59:53.849835 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 16 04:59:53.849846 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 16 04:59:53.849858 kernel: efi: EFI v2.7 by EDK II Sep 16 04:59:53.849872 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Sep 16 04:59:53.849884 kernel: secureboot: Secure boot disabled Sep 16 04:59:53.849895 kernel: SMBIOS 2.7 present. Sep 16 04:59:53.849907 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 16 04:59:53.849919 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:59:53.849931 kernel: Hypervisor detected: KVM Sep 16 04:59:53.849943 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 16 04:59:53.849955 kernel: kvm-clock: using sched offset of 5173917796 cycles Sep 16 04:59:53.849967 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 04:59:53.849980 kernel: tsc: Detected 2500.004 MHz processor Sep 16 04:59:53.849992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 16 04:59:53.850007 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 16 04:59:53.850019 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 16 04:59:53.850031 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 16 04:59:53.850043 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 16 04:59:53.850056 kernel: Using GB pages 
for direct mapping Sep 16 04:59:53.850073 kernel: ACPI: Early table checksum verification disabled Sep 16 04:59:53.850089 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 16 04:59:53.850102 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 16 04:59:53.850115 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 16 04:59:53.850128 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 16 04:59:53.850140 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 16 04:59:53.850153 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 16 04:59:53.850166 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 16 04:59:53.850179 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 16 04:59:53.850194 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 16 04:59:53.850207 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 16 04:59:53.850220 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 16 04:59:53.850233 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 16 04:59:53.850245 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 16 04:59:53.850258 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 16 04:59:53.850271 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 16 04:59:53.850284 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 16 04:59:53.850296 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 16 04:59:53.850312 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 16 04:59:53.850325 kernel: ACPI: 
Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 16 04:59:53.850338 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 16 04:59:53.850351 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 16 04:59:53.850364 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 16 04:59:53.850376 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 16 04:59:53.850695 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 16 04:59:53.850713 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 16 04:59:53.850728 kernel: NUMA: Initialized distance table, cnt=1 Sep 16 04:59:53.850747 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Sep 16 04:59:53.850762 kernel: Zone ranges: Sep 16 04:59:53.850776 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 16 04:59:53.850789 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 16 04:59:53.850803 kernel: Normal empty Sep 16 04:59:53.850817 kernel: Device empty Sep 16 04:59:53.850832 kernel: Movable zone start for each node Sep 16 04:59:53.850846 kernel: Early memory node ranges Sep 16 04:59:53.850860 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 16 04:59:53.850877 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 16 04:59:53.850891 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 16 04:59:53.850904 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 16 04:59:53.850918 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 16 04:59:53.851013 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 16 04:59:53.851027 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 16 04:59:53.851040 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 16 04:59:53.851052 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 16 04:59:53.851065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl 
dfl lint[0x1]) Sep 16 04:59:53.851081 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 16 04:59:53.851092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 16 04:59:53.851104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 16 04:59:53.851117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 16 04:59:53.851129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 16 04:59:53.851141 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 16 04:59:53.851153 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 16 04:59:53.851165 kernel: TSC deadline timer available Sep 16 04:59:53.851177 kernel: CPU topo: Max. logical packages: 1 Sep 16 04:59:53.851189 kernel: CPU topo: Max. logical dies: 1 Sep 16 04:59:53.851206 kernel: CPU topo: Max. dies per package: 1 Sep 16 04:59:53.851219 kernel: CPU topo: Max. threads per core: 2 Sep 16 04:59:53.851232 kernel: CPU topo: Num. cores per package: 1 Sep 16 04:59:53.851246 kernel: CPU topo: Num. 
threads per package: 2 Sep 16 04:59:53.851259 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 16 04:59:53.851273 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 16 04:59:53.851285 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 16 04:59:53.851299 kernel: Booting paravirtualized kernel on KVM Sep 16 04:59:53.851313 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 16 04:59:53.851331 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 16 04:59:53.851344 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 16 04:59:53.851357 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 16 04:59:53.851369 kernel: pcpu-alloc: [0] 0 1 Sep 16 04:59:53.851383 kernel: kvm-guest: PV spinlocks enabled Sep 16 04:59:53.851397 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 16 04:59:53.851415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:59:53.851429 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 16 04:59:53.851446 kernel: random: crng init done Sep 16 04:59:53.851460 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:59:53.851474 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 16 04:59:53.851488 kernel: Fallback order for Node 0: 0 Sep 16 04:59:53.851502 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Sep 16 04:59:53.851517 kernel: Policy zone: DMA32 Sep 16 04:59:53.851542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:59:53.851559 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:59:53.851575 kernel: Kernel/User page tables isolation: enabled Sep 16 04:59:53.851589 kernel: ftrace: allocating 40125 entries in 157 pages Sep 16 04:59:53.851604 kernel: ftrace: allocated 157 pages with 5 groups Sep 16 04:59:53.851622 kernel: Dynamic Preempt: voluntary Sep 16 04:59:53.851637 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:59:53.851653 kernel: rcu: RCU event tracing is enabled. Sep 16 04:59:53.853701 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:59:53.853727 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:59:53.853743 kernel: Rude variant of Tasks RCU enabled. Sep 16 04:59:53.853762 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:59:53.853776 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 04:59:53.853790 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:59:53.853804 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:59:53.853816 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:59:53.853830 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 16 04:59:53.853843 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 16 04:59:53.853857 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:59:53.853877 kernel: Console: colour dummy device 80x25 Sep 16 04:59:53.853892 kernel: printk: legacy console [tty0] enabled Sep 16 04:59:53.853907 kernel: printk: legacy console [ttyS0] enabled Sep 16 04:59:53.853922 kernel: ACPI: Core revision 20240827 Sep 16 04:59:53.853937 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 16 04:59:53.853952 kernel: APIC: Switch to symmetric I/O mode setup Sep 16 04:59:53.853967 kernel: x2apic enabled Sep 16 04:59:53.853983 kernel: APIC: Switched APIC routing to: physical x2apic Sep 16 04:59:53.853998 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 16 04:59:53.854014 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Sep 16 04:59:53.854032 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 16 04:59:53.854047 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 16 04:59:53.854062 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 16 04:59:53.854077 kernel: Spectre V2 : Mitigation: Retpolines Sep 16 04:59:53.854092 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 16 04:59:53.854107 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 16 04:59:53.854122 kernel: RETBleed: Vulnerable Sep 16 04:59:53.854136 kernel: Speculative Store Bypass: Vulnerable Sep 16 04:59:53.854151 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 16 04:59:53.854165 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 16 04:59:53.854183 kernel: GDS: Unknown: Dependent on hypervisor status Sep 16 04:59:53.854197 kernel: active return thunk: its_return_thunk Sep 16 04:59:53.854212 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 16 04:59:53.854228 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 16 04:59:53.854243 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 16 04:59:53.854257 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 16 04:59:53.854271 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 16 04:59:53.854286 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 16 04:59:53.854301 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 16 04:59:53.854315 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 16 04:59:53.854330 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 16 04:59:53.854348 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 16 04:59:53.854363 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 16 04:59:53.854378 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 16 04:59:53.854392 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 16 04:59:53.854407 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 16 04:59:53.854422 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 16 04:59:53.854436 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 16 04:59:53.854449 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 16 04:59:53.854461 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 16 04:59:53.854473 kernel: Freeing SMP alternatives memory: 32K Sep 16 04:59:53.854486 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:59:53.854503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:59:53.854515 kernel: landlock: Up and running. Sep 16 04:59:53.854528 kernel: SELinux: Initializing. Sep 16 04:59:53.854540 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 16 04:59:53.854553 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 16 04:59:53.854566 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 16 04:59:53.854579 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 16 04:59:53.854593 kernel: signal: max sigframe size: 3632 Sep 16 04:59:53.854607 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:59:53.854624 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:59:53.854638 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:59:53.854655 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 16 04:59:53.854776 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:59:53.854792 kernel: smpboot: x86: Booting SMP configuration: Sep 16 04:59:53.854805 kernel: .... node #0, CPUs: #1 Sep 16 04:59:53.854821 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 16 04:59:53.854838 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 16 04:59:53.854853 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:59:53.854868 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Sep 16 04:59:53.854889 kernel: Memory: 1908060K/2037804K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 125188K reserved, 0K cma-reserved) Sep 16 04:59:53.854904 kernel: devtmpfs: initialized Sep 16 04:59:53.854919 kernel: x86/mm: Memory block size: 128MB Sep 16 04:59:53.854944 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 16 04:59:53.854959 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:59:53.854974 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:59:53.854989 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:59:53.855005 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:59:53.855020 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:59:53.855040 kernel: audit: type=2000 audit(1757998791.979:1): state=initialized audit_enabled=0 res=1 Sep 16 04:59:53.855054 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:59:53.855070 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 16 04:59:53.855085 kernel: cpuidle: using governor menu Sep 16 04:59:53.855100 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:59:53.855116 kernel: dca service started, version 1.12.1 Sep 16 04:59:53.855131 kernel: PCI: Using configuration type 1 for base access Sep 16 04:59:53.855146 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 16 04:59:53.855161 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:59:53.855179 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:59:53.855194 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:59:53.855209 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:59:53.855224 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:59:53.855238 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:59:53.855253 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:59:53.855268 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 16 04:59:53.855283 kernel: ACPI: Interpreter enabled Sep 16 04:59:53.855298 kernel: ACPI: PM: (supports S0 S5) Sep 16 04:59:53.855316 kernel: ACPI: Using IOAPIC for interrupt routing Sep 16 04:59:53.855331 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 16 04:59:53.855346 kernel: PCI: Using E820 reservations for host bridge windows Sep 16 04:59:53.855362 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 16 04:59:53.855375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:59:53.855602 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:59:53.857833 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 16 04:59:53.858009 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 16 04:59:53.858031 kernel: acpiphp: Slot [3] registered Sep 16 04:59:53.858049 kernel: acpiphp: Slot [4] registered Sep 16 04:59:53.858065 kernel: acpiphp: Slot [5] registered Sep 16 04:59:53.858082 kernel: acpiphp: Slot [6] registered Sep 16 04:59:53.858098 kernel: acpiphp: Slot [7] registered Sep 16 04:59:53.858114 kernel: acpiphp: Slot [8] registered Sep 16 04:59:53.858130 kernel: acpiphp: Slot [9] registered 
Sep 16 04:59:53.858146 kernel: acpiphp: Slot [10] registered Sep 16 04:59:53.858167 kernel: acpiphp: Slot [11] registered Sep 16 04:59:53.858184 kernel: acpiphp: Slot [12] registered Sep 16 04:59:53.858201 kernel: acpiphp: Slot [13] registered Sep 16 04:59:53.858217 kernel: acpiphp: Slot [14] registered Sep 16 04:59:53.858233 kernel: acpiphp: Slot [15] registered Sep 16 04:59:53.858249 kernel: acpiphp: Slot [16] registered Sep 16 04:59:53.858265 kernel: acpiphp: Slot [17] registered Sep 16 04:59:53.858282 kernel: acpiphp: Slot [18] registered Sep 16 04:59:53.858298 kernel: acpiphp: Slot [19] registered Sep 16 04:59:53.858314 kernel: acpiphp: Slot [20] registered Sep 16 04:59:53.858334 kernel: acpiphp: Slot [21] registered Sep 16 04:59:53.858350 kernel: acpiphp: Slot [22] registered Sep 16 04:59:53.858367 kernel: acpiphp: Slot [23] registered Sep 16 04:59:53.858383 kernel: acpiphp: Slot [24] registered Sep 16 04:59:53.858399 kernel: acpiphp: Slot [25] registered Sep 16 04:59:53.858416 kernel: acpiphp: Slot [26] registered Sep 16 04:59:53.858431 kernel: acpiphp: Slot [27] registered Sep 16 04:59:53.858446 kernel: acpiphp: Slot [28] registered Sep 16 04:59:53.858461 kernel: acpiphp: Slot [29] registered Sep 16 04:59:53.858480 kernel: acpiphp: Slot [30] registered Sep 16 04:59:53.858496 kernel: acpiphp: Slot [31] registered Sep 16 04:59:53.858512 kernel: PCI host bridge to bus 0000:00 Sep 16 04:59:53.858682 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 16 04:59:53.858816 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 16 04:59:53.858968 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 16 04:59:53.859105 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 16 04:59:53.859238 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 16 04:59:53.859363 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:59:53.859537 
kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:59:53.860752 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Sep 16 04:59:53.860927 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Sep 16 04:59:53.861063 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 16 04:59:53.861194 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 16 04:59:53.861329 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 16 04:59:53.861458 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 16 04:59:53.861595 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 16 04:59:53.864819 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 16 04:59:53.864994 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 16 04:59:53.865155 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Sep 16 04:59:53.865296 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Sep 16 04:59:53.865443 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 16 04:59:53.865584 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 16 04:59:53.865753 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Sep 16 04:59:53.865895 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Sep 16 04:59:53.866042 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Sep 16 04:59:53.866179 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Sep 16 04:59:53.866205 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 16 04:59:53.866222 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 16 04:59:53.866238 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 16 04:59:53.866255 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 
11 Sep 16 04:59:53.866273 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 16 04:59:53.866290 kernel: iommu: Default domain type: Translated Sep 16 04:59:53.866307 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 16 04:59:53.866324 kernel: efivars: Registered efivars operations Sep 16 04:59:53.866341 kernel: PCI: Using ACPI for IRQ routing Sep 16 04:59:53.866361 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 16 04:59:53.866377 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Sep 16 04:59:53.866392 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 16 04:59:53.866408 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 16 04:59:53.866547 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 16 04:59:53.867747 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 16 04:59:53.867919 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 16 04:59:53.867941 kernel: vgaarb: loaded Sep 16 04:59:53.867964 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 16 04:59:53.867980 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 16 04:59:53.867996 kernel: clocksource: Switched to clocksource kvm-clock Sep 16 04:59:53.868012 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:59:53.868028 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:59:53.868044 kernel: pnp: PnP ACPI init Sep 16 04:59:53.868060 kernel: pnp: PnP ACPI: found 5 devices Sep 16 04:59:53.868076 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 16 04:59:53.868092 kernel: NET: Registered PF_INET protocol family Sep 16 04:59:53.868111 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:59:53.868128 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 16 04:59:53.868144 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Sep 16 04:59:53.868161 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 04:59:53.868178 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 16 04:59:53.868193 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 16 04:59:53.868209 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 16 04:59:53.868226 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 16 04:59:53.868242 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:59:53.868261 kernel: NET: Registered PF_XDP protocol family Sep 16 04:59:53.868393 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 16 04:59:53.868517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 16 04:59:53.868640 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 16 04:59:53.869833 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 16 04:59:53.869972 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 16 04:59:53.870117 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 16 04:59:53.870137 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:59:53.870156 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 16 04:59:53.870171 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 16 04:59:53.870186 kernel: clocksource: Switched to clocksource tsc Sep 16 04:59:53.870200 kernel: Initialise system trusted keyrings Sep 16 04:59:53.870214 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 16 04:59:53.870228 kernel: Key type asymmetric registered Sep 16 04:59:53.870241 kernel: Asymmetric key parser 'x509' registered Sep 16 04:59:53.870255 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 04:59:53.870269 
kernel: io scheduler mq-deadline registered Sep 16 04:59:53.870286 kernel: io scheduler kyber registered Sep 16 04:59:53.870300 kernel: io scheduler bfq registered Sep 16 04:59:53.870314 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 04:59:53.870328 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:59:53.870342 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 04:59:53.870356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 04:59:53.870370 kernel: i8042: Warning: Keylock active Sep 16 04:59:53.870384 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 04:59:53.870398 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 04:59:53.870536 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 16 04:59:53.870662 kernel: rtc_cmos 00:00: registered as rtc0 Sep 16 04:59:53.870800 kernel: rtc_cmos 00:00: setting system clock to 2025-09-16T04:59:53 UTC (1757998793) Sep 16 04:59:53.870919 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 16 04:59:53.870969 kernel: intel_pstate: CPU model not supported Sep 16 04:59:53.870987 kernel: efifb: probing for efifb Sep 16 04:59:53.871003 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 16 04:59:53.871017 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 16 04:59:53.871035 kernel: efifb: scrolling: redraw Sep 16 04:59:53.871050 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 16 04:59:53.871065 kernel: Console: switching to colour frame buffer device 100x37 Sep 16 04:59:53.871079 kernel: fb0: EFI VGA frame buffer device Sep 16 04:59:53.871094 kernel: pstore: Using crash dump compression: deflate Sep 16 04:59:53.871109 kernel: pstore: Registered efi_pstore as persistent store backend Sep 16 04:59:53.871124 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:59:53.871138 kernel: Segment Routing with IPv6 Sep 16 04:59:53.871153 kernel: In-situ OAM 
(IOAM) with IPv6 Sep 16 04:59:53.871171 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:59:53.871185 kernel: Key type dns_resolver registered Sep 16 04:59:53.871199 kernel: IPI shorthand broadcast: enabled Sep 16 04:59:53.871214 kernel: sched_clock: Marking stable (2585002789, 142852211)->(2818664280, -90809280) Sep 16 04:59:53.871228 kernel: registered taskstats version 1 Sep 16 04:59:53.871243 kernel: Loading compiled-in X.509 certificates Sep 16 04:59:53.871257 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 04:59:53.871271 kernel: Demotion targets for Node 0: null Sep 16 04:59:53.871286 kernel: Key type .fscrypt registered Sep 16 04:59:53.871304 kernel: Key type fscrypt-provisioning registered Sep 16 04:59:53.871319 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:59:53.871334 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:59:53.871348 kernel: ima: No architecture policies found Sep 16 04:59:53.871363 kernel: clk: Disabling unused clocks Sep 16 04:59:53.871377 kernel: Warning: unable to open an initial console. Sep 16 04:59:53.871393 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:59:53.871407 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:59:53.871422 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:59:53.871443 kernel: Run /init as init process Sep 16 04:59:53.871457 kernel: with arguments: Sep 16 04:59:53.871472 kernel: /init Sep 16 04:59:53.871486 kernel: with environment: Sep 16 04:59:53.871500 kernel: HOME=/ Sep 16 04:59:53.871518 kernel: TERM=linux Sep 16 04:59:53.871532 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:59:53.871548 systemd[1]: Successfully made /usr/ read-only. 
Sep 16 04:59:53.871567 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:59:53.871583 systemd[1]: Detected virtualization amazon.
Sep 16 04:59:53.871598 systemd[1]: Detected architecture x86-64.
Sep 16 04:59:53.871613 systemd[1]: Running in initrd.
Sep 16 04:59:53.871630 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:59:53.871645 systemd[1]: Hostname set to .
Sep 16 04:59:53.871660 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:59:53.872061 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:59:53.872082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:59:53.872098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:59:53.872114 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:59:53.872131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:59:53.872150 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:59:53.872167 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:59:53.872184 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:59:53.872200 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:59:53.872215 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:59:53.872232 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:59:53.872247 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:59:53.872267 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:59:53.872284 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:59:53.872301 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:59:53.872320 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:59:53.872335 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:59:53.872352 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:59:53.872370 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:59:53.872388 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:59:53.872405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:59:53.872426 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:59:53.872444 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:59:53.872462 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:59:53.872480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:59:53.872497 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:59:53.872516 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:59:53.872534 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:59:53.872552 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:59:53.872573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:59:53.872591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:59:53.872641 systemd-journald[207]: Collecting audit messages is disabled.
Sep 16 04:59:53.873721 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:59:53.873746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:59:53.873763 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:59:53.873781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:59:53.873800 systemd-journald[207]: Journal started
Sep 16 04:59:53.873839 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2d5267806a3121a7d0e42c2a83558a) is 4.8M, max 38.4M, 33.6M free.
Sep 16 04:59:53.886729 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:59:53.888287 systemd-modules-load[209]: Inserted module 'overlay'
Sep 16 04:59:53.892403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:59:53.901910 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:59:53.912845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:59:53.916982 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:59:53.924137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:59:53.936695 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:59:53.940694 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:59:53.946573 kernel: Bridge firewalling registered
Sep 16 04:59:53.946849 systemd-modules-load[209]: Inserted module 'br_netfilter'
Sep 16 04:59:53.949979 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:59:53.955841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:59:53.958787 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:59:53.961038 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:59:53.965843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:59:53.969872 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:59:53.975811 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:59:53.979806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:59:53.989761 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:59:54.004715 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 16 04:59:54.028326 systemd-resolved[246]: Positive Trust Anchors:
Sep 16 04:59:54.028345 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:59:54.028411 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:59:54.036638 systemd-resolved[246]: Defaulting to hostname 'linux'.
Sep 16 04:59:54.039838 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:59:54.041298 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:59:54.090714 kernel: SCSI subsystem initialized
Sep 16 04:59:54.100699 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:59:54.112711 kernel: iscsi: registered transport (tcp)
Sep 16 04:59:54.134239 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:59:54.134318 kernel: QLogic iSCSI HBA Driver
Sep 16 04:59:54.153511 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:59:54.177594 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:59:54.179461 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:59:54.226065 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:59:54.228528 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:59:54.280702 kernel: raid6: avx512x4 gen() 18187 MB/s
Sep 16 04:59:54.298695 kernel: raid6: avx512x2 gen() 18117 MB/s
Sep 16 04:59:54.316698 kernel: raid6: avx512x1 gen() 17938 MB/s
Sep 16 04:59:54.334693 kernel: raid6: avx2x4 gen() 18030 MB/s
Sep 16 04:59:54.352693 kernel: raid6: avx2x2 gen() 18096 MB/s
Sep 16 04:59:54.370954 kernel: raid6: avx2x1 gen() 13943 MB/s
Sep 16 04:59:54.371025 kernel: raid6: using algorithm avx512x4 gen() 18187 MB/s
Sep 16 04:59:54.389894 kernel: raid6: .... xor() 7721 MB/s, rmw enabled
Sep 16 04:59:54.389954 kernel: raid6: using avx512x2 recovery algorithm
Sep 16 04:59:54.411716 kernel: xor: automatically using best checksumming function avx
Sep 16 04:59:54.579705 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:59:54.586467 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:59:54.589540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:59:54.616248 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 16 04:59:54.622872 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:59:54.628233 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:59:54.652905 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Sep 16 04:59:54.680765 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:59:54.682872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:59:54.748340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:59:54.752647 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:59:54.828447 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 16 04:59:54.828747 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 16 04:59:54.837705 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 16 04:59:54.860696 kernel: cryptd: max_cpu_qlen set to 1000
Sep 16 04:59:54.882495 kernel: AES CTR mode by8 optimization enabled
Sep 16 04:59:54.882558 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:16:c0:5d:59:11
Sep 16 04:59:54.885712 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 16 04:59:54.886361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:59:54.886623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:59:54.891048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:59:54.907358 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 16 04:59:54.907439 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Sep 16 04:59:54.913301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:59:54.915703 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:59:54.924707 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 16 04:59:54.939398 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 16 04:59:54.939475 kernel: GPT:9289727 != 16777215
Sep 16 04:59:54.939497 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 16 04:59:54.939516 kernel: GPT:9289727 != 16777215
Sep 16 04:59:54.939534 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 16 04:59:54.939553 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:59:54.946087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:59:54.950486 (udev-worker)[506]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 04:59:54.980699 kernel: nvme nvme0: using unchecked data buffer
Sep 16 04:59:55.073499 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 16 04:59:55.088159 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 16 04:59:55.098289 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 16 04:59:55.099160 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:59:55.125589 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 16 04:59:55.126215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 16 04:59:55.127786 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:59:55.128987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:59:55.130147 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:59:55.132030 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 04:59:55.134841 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 04:59:55.153123 disk-uuid[696]: Primary Header is updated.
Sep 16 04:59:55.153123 disk-uuid[696]: Secondary Entries is updated.
Sep 16 04:59:55.153123 disk-uuid[696]: Secondary Header is updated.
Sep 16 04:59:55.157812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:59:55.161694 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:59:56.172762 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:59:56.173655 disk-uuid[700]: The operation has completed successfully.
Sep 16 04:59:56.292467 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 04:59:56.292600 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 04:59:56.320905 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 04:59:56.334202 sh[964]: Success
Sep 16 04:59:56.359829 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 04:59:56.359909 kernel: device-mapper: uevent: version 1.0.3
Sep 16 04:59:56.361118 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 04:59:56.373693 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 16 04:59:56.467162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 04:59:56.469734 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 04:59:56.480246 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 04:59:56.499705 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (987)
Sep 16 04:59:56.503817 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e
Sep 16 04:59:56.503897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:59:56.588703 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 16 04:59:56.588785 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 04:59:56.588806 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 04:59:56.601934 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 04:59:56.603136 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:59:56.603802 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 04:59:56.604826 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 04:59:56.607270 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 04:59:56.651709 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1018)
Sep 16 04:59:56.657855 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:59:56.657931 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:59:56.666245 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:59:56.666325 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:59:56.674755 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:59:56.676444 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 04:59:56.679837 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 04:59:56.720192 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:59:56.722599 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:59:56.771393 systemd-networkd[1156]: lo: Link UP
Sep 16 04:59:56.771407 systemd-networkd[1156]: lo: Gained carrier
Sep 16 04:59:56.774232 systemd-networkd[1156]: Enumeration completed
Sep 16 04:59:56.774363 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:59:56.775221 systemd[1]: Reached target network.target - Network.
Sep 16 04:59:56.776246 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:59:56.776253 systemd-networkd[1156]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:59:56.785876 systemd-networkd[1156]: eth0: Link UP
Sep 16 04:59:56.785887 systemd-networkd[1156]: eth0: Gained carrier
Sep 16 04:59:56.785909 systemd-networkd[1156]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:59:56.802793 systemd-networkd[1156]: eth0: DHCPv4 address 172.31.17.131/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 16 04:59:57.175036 ignition[1101]: Ignition 2.22.0
Sep 16 04:59:57.175053 ignition[1101]: Stage: fetch-offline
Sep 16 04:59:57.175286 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:59:57.175299 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:59:57.175575 ignition[1101]: Ignition finished successfully
Sep 16 04:59:57.178190 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:59:57.180176 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 16 04:59:57.213813 ignition[1165]: Ignition 2.22.0
Sep 16 04:59:57.213827 ignition[1165]: Stage: fetch
Sep 16 04:59:57.214200 ignition[1165]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:59:57.214212 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:59:57.214319 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:59:57.222748 ignition[1165]: PUT result: OK
Sep 16 04:59:57.224516 ignition[1165]: parsed url from cmdline: ""
Sep 16 04:59:57.224525 ignition[1165]: no config URL provided
Sep 16 04:59:57.224532 ignition[1165]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:59:57.224544 ignition[1165]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:59:57.224586 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:59:57.225095 ignition[1165]: PUT result: OK
Sep 16 04:59:57.225133 ignition[1165]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 16 04:59:57.225724 ignition[1165]: GET result: OK
Sep 16 04:59:57.225790 ignition[1165]: parsing config with SHA512: 0bb9add7c1029e0a2e94fb6690b5851726f60d46ebf59e454f969d69a471d5a9e83220c69a8f166050f714a932ea0a4a16598f61967486ed6e42fea74473deec
Sep 16 04:59:57.230540 unknown[1165]: fetched base config from "system"
Sep 16 04:59:57.231286 ignition[1165]: fetch: fetch complete
Sep 16 04:59:57.230549 unknown[1165]: fetched base config from "system"
Sep 16 04:59:57.231291 ignition[1165]: fetch: fetch passed
Sep 16 04:59:57.230559 unknown[1165]: fetched user config from "aws"
Sep 16 04:59:57.231345 ignition[1165]: Ignition finished successfully
Sep 16 04:59:57.235297 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 04:59:57.236983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:59:57.274080 ignition[1171]: Ignition 2.22.0
Sep 16 04:59:57.274095 ignition[1171]: Stage: kargs
Sep 16 04:59:57.274492 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:59:57.274505 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:59:57.274624 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:59:57.275649 ignition[1171]: PUT result: OK
Sep 16 04:59:57.278739 ignition[1171]: kargs: kargs passed
Sep 16 04:59:57.278810 ignition[1171]: Ignition finished successfully
Sep 16 04:59:57.281301 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 04:59:57.282753 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:59:57.317095 ignition[1177]: Ignition 2.22.0
Sep 16 04:59:57.317111 ignition[1177]: Stage: disks
Sep 16 04:59:57.317511 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:59:57.317524 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:59:57.317642 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:59:57.323442 ignition[1177]: PUT result: OK
Sep 16 04:59:57.326134 ignition[1177]: disks: disks passed
Sep 16 04:59:57.326191 ignition[1177]: Ignition finished successfully
Sep 16 04:59:57.328479 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:59:57.329067 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:59:57.329407 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:59:57.330081 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:59:57.330752 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:59:57.331427 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:59:57.333128 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:59:57.372079 systemd-fsck[1186]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 16 04:59:57.374550 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:59:57.376576 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:59:57.546692 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 04:59:57.547631 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:59:57.548502 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:59:57.550433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:59:57.552042 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:59:57.554360 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 16 04:59:57.554771 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:59:57.554796 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:59:57.560517 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:59:57.562553 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:59:57.576691 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1205)
Sep 16 04:59:57.580848 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:59:57.580919 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:59:57.589769 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:59:57.589844 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:59:57.593122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:59:57.835556 initrd-setup-root[1229]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:59:57.850587 initrd-setup-root[1236]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:59:57.856153 initrd-setup-root[1243]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:59:57.861899 initrd-setup-root[1250]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:59:58.112338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:59:58.114474 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:59:58.116520 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:59:58.139839 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:59:58.141954 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:59:58.187354 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:59:58.195071 ignition[1317]: INFO : Ignition 2.22.0
Sep 16 04:59:58.195071 ignition[1317]: INFO : Stage: mount
Sep 16 04:59:58.196625 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:59:58.196625 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:59:58.196625 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:59:58.196625 ignition[1317]: INFO : PUT result: OK
Sep 16 04:59:58.199592 ignition[1317]: INFO : mount: mount passed
Sep 16 04:59:58.200755 ignition[1317]: INFO : Ignition finished successfully
Sep 16 04:59:58.201422 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:59:58.203575 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:59:58.289873 systemd-networkd[1156]: eth0: Gained IPv6LL
Sep 16 04:59:58.549461 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:59:58.583713 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1330)
Sep 16 04:59:58.586781 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:59:58.586853 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:59:58.596515 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:59:58.596594 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:59:58.598842 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:59:58.636792 ignition[1346]: INFO : Ignition 2.22.0
Sep 16 04:59:58.636792 ignition[1346]: INFO : Stage: files
Sep 16 04:59:58.638411 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:59:58.638411 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:59:58.638411 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:59:58.639966 ignition[1346]: INFO : PUT result: OK
Sep 16 04:59:58.641099 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:59:58.642238 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:59:58.642238 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:59:58.646497 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:59:58.647561 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:59:58.648573 unknown[1346]: wrote ssh authorized keys file for user: core
Sep 16 04:59:58.649126 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:59:58.653408 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 16 04:59:58.654468 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 16 04:59:58.704688 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:59:58.917578 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 16 04:59:58.917578 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:59:58.926063 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 16 04:59:59.129109 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 16 04:59:59.255825 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:59:59.255825 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:59:59.258540 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:59:59.264995 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:59:59.264995 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:59:59.264995 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:59:59.264995 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:59:59.264995 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:59:59.269938 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 16 04:59:59.542563 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 16 04:59:59.975723 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 16 04:59:59.975723 ignition[1346]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 16 04:59:59.988254 ignition[1346]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:59:59.996680 ignition[1346]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:59:59.996680 ignition[1346]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 16 05:00:00.003808 ignition[1346]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 05:00:00.003808 ignition[1346]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Sep 16 05:00:00.003808 ignition[1346]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 05:00:00.003808 ignition[1346]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 16 05:00:00.003808 ignition[1346]: INFO : files: files passed
Sep 16 05:00:00.003808 ignition[1346]: INFO : Ignition finished successfully
Sep 16 05:00:00.003935 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 16 05:00:00.014255 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 16 05:00:00.034721 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 16 05:00:00.046295 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 16 05:00:00.046755 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 16 05:00:00.061697 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 16 05:00:00.065027 initrd-setup-root-after-ignition[1377]: grep:
Sep 16 05:00:00.068205 initrd-setup-root-after-ignition[1381]: grep:
Sep 16 05:00:00.069484 initrd-setup-root-after-ignition[1377]: /sysroot/usr/share/flatcar/enabled-sysext.conf
Sep 16 05:00:00.072254 initrd-setup-root-after-ignition[1381]: /sysroot/etc/flatcar/enabled-sysext.conf
Sep 16 05:00:00.070567 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 05:00:00.076568 initrd-setup-root-after-ignition[1377]: : No such file or directory
Sep 16 05:00:00.073295 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 16 05:00:00.084393 initrd-setup-root-after-ignition[1381]: : No such file or directory
Sep 16 05:00:00.076822 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 16 05:00:00.395399 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
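The numbered op(…) entries in the files stage above map one-to-one onto directives in the instance's Ignition config. The config itself is not part of this log; as a rough illustration only, a Butane sketch that would produce operations like op(1)/op(2), op(3), and op(c)/op(e) might look like the following (all values hypothetical except the helm URL and the prepare-helm.service name, which do appear in the log):

```yaml
# Hypothetical Butane config; Butane renders this to Ignition JSON at build time.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core                                   # op(1): create/modify user "core"
      ssh_authorized_keys:                         # op(2): write authorized_keys
        - ssh-ed25519 AAAA... placeholder-key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz   # op(3): fetched via GET
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
systemd:
  units:
    - name: prepare-helm.service                   # op(c)/op(d): unit file written
      enabled: true                                # op(e): preset set to enabled
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
```

Ignition applies the config against the new root while still running in the initramfs, which is why every path in the log carries the /sysroot prefix.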
Sep 16 05:00:00.395839 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 16 05:00:00.401646 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 16 05:00:00.405009 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 05:00:00.409429 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 05:00:00.418264 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 05:00:00.480933 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 05:00:00.496140 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 05:00:00.551507 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 05:00:00.552290 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:00:00.558131 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 05:00:00.563017 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 05:00:00.563271 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 05:00:00.575053 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 05:00:00.581518 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 05:00:00.597072 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 05:00:00.605399 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 05:00:00.610643 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 05:00:00.618288 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 05:00:00.625324 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 05:00:00.631555 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 05:00:00.632306 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 05:00:00.644336 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 05:00:00.656905 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 05:00:00.659913 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 05:00:00.660119 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 05:00:00.673637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:00:00.686214 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:00:00.693955 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 05:00:00.694330 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:00:00.696707 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 05:00:00.699071 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 05:00:00.705596 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 05:00:00.706082 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 05:00:00.710455 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 05:00:00.710724 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 05:00:00.731813 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 05:00:00.756030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 05:00:00.781271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 05:00:00.781540 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 05:00:00.791812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 05:00:00.791980 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 05:00:00.861777 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 05:00:00.917893 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 05:00:01.027103 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 05:00:01.044159 ignition[1401]: INFO : Ignition 2.22.0
Sep 16 05:00:01.044159 ignition[1401]: INFO : Stage: umount
Sep 16 05:00:01.054001 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 05:00:01.054001 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 05:00:01.054001 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 05:00:01.074362 ignition[1401]: INFO : PUT result: OK
Sep 16 05:00:01.082464 ignition[1401]: INFO : umount: umount passed
Sep 16 05:00:01.082464 ignition[1401]: INFO : Ignition finished successfully
Sep 16 05:00:01.099222 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 05:00:01.099406 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 05:00:01.117100 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 05:00:01.120707 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 05:00:01.130413 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 05:00:01.130507 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 05:00:01.135542 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 05:00:01.135639 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 05:00:01.136327 systemd[1]: Stopped target network.target - Network.
Sep 16 05:00:01.146396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 05:00:01.146493 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 05:00:01.158984 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 05:00:01.160272 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 05:00:01.160345 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:00:01.168255 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 05:00:01.189967 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 05:00:01.193248 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 05:00:01.193314 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 05:00:01.195928 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 05:00:01.195995 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 05:00:01.209284 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 05:00:01.209401 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 05:00:01.212121 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 05:00:01.212208 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 05:00:01.213082 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 05:00:01.221082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 05:00:01.222438 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 05:00:01.222573 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 05:00:01.259198 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 05:00:01.259361 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 05:00:01.276414 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 05:00:01.276579 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 05:00:01.300302 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 05:00:01.300634 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 05:00:01.300834 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 05:00:01.320703 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 05:00:01.322477 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 05:00:01.338976 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 05:00:01.339056 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:00:01.346702 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 05:00:01.348035 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 05:00:01.348807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 05:00:01.375362 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 05:00:01.375461 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:00:01.383440 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 05:00:01.383522 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 05:00:01.387887 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 05:00:01.387981 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 05:00:01.408627 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 05:00:01.450928 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 05:00:01.451057 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 05:00:01.495891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 05:00:01.496889 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 05:00:01.510309 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 05:00:01.515386 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:00:01.521261 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 05:00:01.521327 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:00:01.530282 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 05:00:01.530386 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 05:00:01.540437 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 05:00:01.540528 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 05:00:01.556149 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 05:00:01.556259 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 05:00:01.567731 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 05:00:01.579370 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 05:00:01.586951 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 05:00:01.596637 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 05:00:01.597228 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:00:01.615504 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 16 05:00:01.615606 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 05:00:01.623744 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 05:00:01.623819 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 05:00:01.624540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 05:00:01.624610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:00:01.655728 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 16 05:00:01.655825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 16 05:00:01.655876 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 16 05:00:01.655929 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 05:00:01.659647 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 05:00:01.663703 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 05:00:01.679760 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 05:00:01.679897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 05:00:01.682580 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 05:00:01.718072 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 05:00:01.768350 systemd[1]: Switching root.
Sep 16 05:00:01.861804 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 16 05:00:01.861896 systemd-journald[207]: Journal stopped
Sep 16 05:00:07.153141 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 05:00:07.153236 kernel: SELinux: policy capability open_perms=1
Sep 16 05:00:07.153258 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 05:00:07.153284 kernel: SELinux: policy capability always_check_network=0
Sep 16 05:00:07.153305 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 05:00:07.153324 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 05:00:07.153344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 05:00:07.153363 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 05:00:07.162754 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 05:00:07.162832 kernel: audit: type=1403 audit(1757998802.976:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 05:00:07.162870 systemd[1]: Successfully loaded SELinux policy in 239.751ms.
Sep 16 05:00:07.162907 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 85.610ms.
Sep 16 05:00:07.162931 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 05:00:07.162950 systemd[1]: Detected virtualization amazon.
Sep 16 05:00:07.162967 systemd[1]: Detected architecture x86-64.
Sep 16 05:00:07.162986 systemd[1]: Detected first boot.
Sep 16 05:00:07.163005 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 05:00:07.163025 zram_generator::config[1444]: No configuration found.
Sep 16 05:00:07.163045 kernel: Guest personality initialized and is inactive
Sep 16 05:00:07.163073 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 05:00:07.163107 kernel: Initialized host personality
Sep 16 05:00:07.163127 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 05:00:07.163144 systemd[1]: Populated /etc with preset unit settings.
Sep 16 05:00:07.174096 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 05:00:07.174135 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 05:00:07.174163 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 05:00:07.174186 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 05:00:07.174205 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 05:00:07.174227 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 05:00:07.174256 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 05:00:07.174277 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 05:00:07.174299 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 05:00:07.174320 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 05:00:07.174341 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 05:00:07.174364 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 05:00:07.180782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 05:00:07.180823 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 05:00:07.180849 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 05:00:07.180884 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 05:00:07.180909 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 05:00:07.180937 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 05:00:07.180997 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 05:00:07.181022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 05:00:07.181047 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 05:00:07.181072 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 05:00:07.181101 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 05:00:07.181127 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 05:00:07.181151 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 05:00:07.181177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 05:00:07.181226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 05:00:07.181252 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 05:00:07.181277 systemd[1]: Reached target swap.target - Swaps.
Sep 16 05:00:07.181300 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 05:00:07.181325 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 05:00:07.181350 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 05:00:07.181378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 05:00:07.181403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 05:00:07.181427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 05:00:07.181451 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 05:00:07.181476 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 05:00:07.181501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 05:00:07.181527 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 05:00:07.181552 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:07.181580 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 05:00:07.181604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 05:00:07.181628 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 05:00:07.181654 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 05:00:07.198597 systemd[1]: Reached target machines.target - Containers.
Sep 16 05:00:07.198635 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 05:00:07.199108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:00:07.199284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 05:00:07.199315 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 05:00:07.199348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 05:00:07.199369 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 05:00:07.199391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 05:00:07.199412 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 05:00:07.199434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 05:00:07.199556 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 16 05:00:07.199583 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 16 05:00:07.199605 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 16 05:00:07.199632 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 16 05:00:07.199654 systemd[1]: Stopped systemd-fsck-usr.service. Sep 16 05:00:07.207756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 05:00:07.207807 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 16 05:00:07.208180 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 16 05:00:07.208207 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 16 05:00:07.208228 kernel: loop: module loaded Sep 16 05:00:07.208248 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 16 05:00:07.208268 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 16 05:00:07.208297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 16 05:00:07.208319 systemd[1]: verity-setup.service: Deactivated successfully. Sep 16 05:00:07.208339 systemd[1]: Stopped verity-setup.service. Sep 16 05:00:07.208359 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 16 05:00:07.208378 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 16 05:00:07.208395 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 16 05:00:07.208414 systemd[1]: Mounted media.mount - External Media Directory. Sep 16 05:00:07.208433 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 16 05:00:07.208451 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 16 05:00:07.208469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 16 05:00:07.208492 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 16 05:00:07.208511 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 16 05:00:07.208529 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 16 05:00:07.208547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 16 05:00:07.208566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 05:00:07.208586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 05:00:07.208608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 05:00:07.208629 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 05:00:07.208651 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 05:00:07.208698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 16 05:00:07.208721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 16 05:00:07.208743 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 16 05:00:07.208765 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 16 05:00:07.208784 kernel: fuse: init (API version 7.41) Sep 16 05:00:07.208803 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Sep 16 05:00:07.208821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 16 05:00:07.208840 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 16 05:00:07.208857 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 16 05:00:07.208881 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 16 05:00:07.208902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 05:00:07.208923 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 16 05:00:07.208942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 05:00:07.208965 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 16 05:00:07.208985 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 05:00:07.209006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 16 05:00:07.209108 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 16 05:00:07.209133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 16 05:00:07.209155 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 16 05:00:07.209175 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 16 05:00:07.209194 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 16 05:00:07.209217 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 16 05:00:07.209243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 16 05:00:07.209265 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 16 05:00:07.209286 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 16 05:00:07.209307 kernel: ACPI: bus type drm_connector registered Sep 16 05:00:07.209471 systemd-journald[1523]: Collecting audit messages is disabled. Sep 16 05:00:07.209528 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 16 05:00:07.209558 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 16 05:00:07.209581 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 05:00:07.209604 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 05:00:07.209626 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 16 05:00:07.209651 systemd-journald[1523]: Journal started Sep 16 05:00:07.226852 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec2d5267806a3121a7d0e42c2a83558a) is 4.8M, max 38.4M, 33.6M free. Sep 16 05:00:07.226968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 16 05:00:05.504469 systemd[1]: Queued start job for default target multi-user.target. Sep 16 05:00:07.233358 systemd[1]: Started systemd-journald.service - Journal Service. Sep 16 05:00:05.519369 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 16 05:00:05.519940 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 16 05:00:07.259733 kernel: loop0: detected capacity change from 0 to 128016 Sep 16 05:00:07.291533 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 16 05:00:07.295625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 16 05:00:07.300243 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Sep 16 05:00:07.307535 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Sep 16 05:00:07.307565 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Sep 16 05:00:07.317409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 05:00:07.320416 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 05:00:07.328662 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec2d5267806a3121a7d0e42c2a83558a is 72.960ms for 1038 entries.
Sep 16 05:00:07.328662 systemd-journald[1523]: System Journal (/var/log/journal/ec2d5267806a3121a7d0e42c2a83558a) is 8M, max 195.6M, 187.6M free.
Sep 16 05:00:07.421456 systemd-journald[1523]: Received client request to flush runtime journal.
Sep 16 05:00:07.421541 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 05:00:07.425109 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 05:00:07.444783 kernel: loop1: detected capacity change from 0 to 110984
Sep 16 05:00:07.481519 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 05:00:07.485553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 05:00:07.565732 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Sep 16 05:00:07.566155 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Sep 16 05:00:07.573994 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 05:00:07.581496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 05:00:07.604831 kernel: loop2: detected capacity change from 0 to 229808
Sep 16 05:00:07.766802 kernel: loop3: detected capacity change from 0 to 72368
Sep 16 05:00:07.934706 kernel: loop4: detected capacity change from 0 to 128016
Sep 16 05:00:08.011698 kernel: loop5: detected capacity change from 0 to 110984
Sep 16 05:00:08.039149 kernel: loop6: detected capacity change from 0 to 229808
Sep 16 05:00:08.083694 kernel: loop7: detected capacity change from 0 to 72368
Sep 16 05:00:08.148395 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 16 05:00:08.150940 (sd-merge)[1605]: Merged extensions into '/usr'.
Sep 16 05:00:08.166530 systemd[1]: Reload requested from client PID 1551 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 05:00:08.166751 systemd[1]: Reloading...
Sep 16 05:00:08.530918 zram_generator::config[1627]: No configuration found.
Sep 16 05:00:09.690383 systemd[1]: Reloading finished in 1521 ms.
Sep 16 05:00:09.751353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 05:00:09.797903 systemd[1]: Starting ensure-sysext.service...
Sep 16 05:00:09.804870 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 05:00:09.844250 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)...
Sep 16 05:00:09.844277 systemd[1]: Reloading...
Sep 16 05:00:09.881548 ldconfig[1544]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 05:00:09.892753 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 05:00:09.892801 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 05:00:09.893309 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 05:00:09.893750 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 05:00:09.895300 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 05:00:09.895764 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 16 05:00:09.895866 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 16 05:00:09.904971 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 05:00:09.905160 systemd-tmpfiles[1683]: Skipping /boot
Sep 16 05:00:09.921110 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 05:00:09.921282 systemd-tmpfiles[1683]: Skipping /boot
Sep 16 05:00:09.990803 zram_generator::config[1711]: No configuration found.
Sep 16 05:00:10.338507 systemd[1]: Reloading finished in 493 ms.
Sep 16 05:00:10.367883 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 05:00:10.368747 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 05:00:10.388220 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 05:00:10.417144 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 05:00:10.422037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 05:00:10.428403 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 05:00:10.437029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 05:00:10.446010 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 05:00:10.449823 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 05:00:10.458595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:10.458941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:00:10.462986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 05:00:10.470644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 05:00:10.481447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 05:00:10.482320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 05:00:10.482516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 05:00:10.489585 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 16 05:00:10.490562 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:10.502331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:10.502637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:00:10.504025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 05:00:10.504172 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 05:00:10.504328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:10.511946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:10.512433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 05:00:10.529780 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 05:00:10.531226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 05:00:10.531576 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 05:00:10.531972 systemd[1]: Reached target time-set.target - System Time Set.
Sep 16 05:00:10.533558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 05:00:10.535041 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 05:00:10.536719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 05:00:10.539273 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 16 05:00:10.553665 systemd[1]: Finished ensure-sysext.service.
Sep 16 05:00:10.565864 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 16 05:00:10.579248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 05:00:10.579519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 05:00:10.586875 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 16 05:00:10.590070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 05:00:10.591705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 05:00:10.596427 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 05:00:10.596543 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 05:00:10.597537 systemd-udevd[1771]: Using default interface naming scheme 'v255'.
Sep 16 05:00:10.614040 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 05:00:10.614297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 05:00:10.629751 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 16 05:00:10.652737 augenrules[1805]: No rules
Sep 16 05:00:10.653094 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 05:00:10.653424 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 05:00:10.668176 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 16 05:00:10.758341 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 05:00:10.765462 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 05:00:10.783365 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 16 05:00:10.793417 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 16 05:00:10.987465 systemd-resolved[1769]: Positive Trust Anchors:
Sep 16 05:00:10.987482 systemd-resolved[1769]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 05:00:10.987548 systemd-resolved[1769]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 05:00:11.003683 systemd-resolved[1769]: Defaulting to hostname 'linux'.
Sep 16 05:00:11.009595 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 05:00:11.010587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 05:00:11.012175 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 05:00:11.013397 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 16 05:00:11.015161 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 16 05:00:11.015784 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 16 05:00:11.017103 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 16 05:00:11.018814 systemd-networkd[1819]: lo: Link UP
Sep 16 05:00:11.018825 systemd-networkd[1819]: lo: Gained carrier
Sep 16 05:00:11.018919 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 16 05:00:11.019476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 16 05:00:11.019632 systemd-networkd[1819]: Enumeration completed
Sep 16 05:00:11.020388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 16 05:00:11.020423 systemd[1]: Reached target paths.target - Path Units.
Sep 16 05:00:11.021702 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 05:00:11.024881 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 16 05:00:11.028782 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 16 05:00:11.036359 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 16 05:00:11.038995 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 16 05:00:11.039918 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 16 05:00:11.049313 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 16 05:00:11.051997 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 16 05:00:11.053548 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 05:00:11.055228 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 16 05:00:11.059005 systemd[1]: Reached target network.target - Network.
Sep 16 05:00:11.060727 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 05:00:11.062120 systemd[1]: Reached target basic.target - Basic System.
Sep 16 05:00:11.063283 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 16 05:00:11.063380 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 16 05:00:11.066914 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 16 05:00:11.071936 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 16 05:00:11.077243 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 16 05:00:11.079750 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 16 05:00:11.082909 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 16 05:00:11.087960 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 16 05:00:11.088941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 16 05:00:11.101727 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 16 05:00:11.105086 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 16 05:00:11.115934 systemd[1]: Started ntpd.service - Network Time Service.
Sep 16 05:00:11.120366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 16 05:00:11.125635 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 16 05:00:11.135106 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 16 05:00:11.142978 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 16 05:00:11.165748 google_oslogin_nss_cache[1853]: oslogin_cache_refresh[1853]: Refreshing passwd entry cache
Sep 16 05:00:11.166150 oslogin_cache_refresh[1853]: Refreshing passwd entry cache
Sep 16 05:00:11.169165 google_oslogin_nss_cache[1853]: oslogin_cache_refresh[1853]: Failure getting users, quitting
Sep 16 05:00:11.169344 oslogin_cache_refresh[1853]: Failure getting users, quitting
Sep 16 05:00:11.169450 google_oslogin_nss_cache[1853]: oslogin_cache_refresh[1853]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 05:00:11.169498 oslogin_cache_refresh[1853]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 16 05:00:11.169608 google_oslogin_nss_cache[1853]: oslogin_cache_refresh[1853]: Refreshing group entry cache
Sep 16 05:00:11.169651 oslogin_cache_refresh[1853]: Refreshing group entry cache
Sep 16 05:00:11.171516 google_oslogin_nss_cache[1853]: oslogin_cache_refresh[1853]: Failure getting groups, quitting
Sep 16 05:00:11.171516 google_oslogin_nss_cache[1853]: oslogin_cache_refresh[1853]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 05:00:11.170327 oslogin_cache_refresh[1853]: Failure getting groups, quitting
Sep 16 05:00:11.170341 oslogin_cache_refresh[1853]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 16 05:00:11.173847 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 16 05:00:11.180842 jq[1851]: false
Sep 16 05:00:11.193779 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 16 05:00:11.197999 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 16 05:00:11.202105 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 16 05:00:11.202958 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 16 05:00:11.212663 systemd[1]: Starting update-engine.service - Update Engine...
Sep 16 05:00:11.217177 extend-filesystems[1852]: Found /dev/nvme0n1p6
Sep 16 05:00:11.219893 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 16 05:00:11.236744 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 16 05:00:11.237847 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 16 05:00:11.238963 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 16 05:00:11.239402 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 16 05:00:11.240753 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 16 05:00:11.262273 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 16 05:00:11.271793 extend-filesystems[1852]: Found /dev/nvme0n1p9
Sep 16 05:00:11.289000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 16 05:00:11.289315 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 16 05:00:11.303287 extend-filesystems[1852]: Checking size of /dev/nvme0n1p9
Sep 16 05:00:11.327223 jq[1870]: true
Sep 16 05:00:11.333173 (ntainerd)[1893]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 16 05:00:11.369570 update_engine[1868]: I20250916 05:00:11.358954 1868 main.cc:92] Flatcar Update Engine starting
Sep 16 05:00:11.374736 systemd[1]: motdgen.service: Deactivated successfully.
Sep 16 05:00:11.375732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 16 05:00:11.398339 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 16 05:00:11.412488 tar[1873]: linux-amd64/LICENSE
Sep 16 05:00:11.412488 tar[1873]: linux-amd64/helm
Sep 16 05:00:11.413943 ntpd[1855]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: ----------------------------------------------------
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: ntp-4 is maintained by Network Time Foundation,
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: corporation. Support and training for ntp-4 are
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: available at https://www.nwtime.org/support
Sep 16 05:00:11.416481 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: ----------------------------------------------------
Sep 16 05:00:11.414017 ntpd[1855]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 16 05:00:11.414027 ntpd[1855]: ----------------------------------------------------
Sep 16 05:00:11.414035 ntpd[1855]: ntp-4 is maintained by Network Time Foundation,
Sep 16 05:00:11.414045 ntpd[1855]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 16 05:00:11.414054 ntpd[1855]: corporation. Support and training for ntp-4 are
Sep 16 05:00:11.414063 ntpd[1855]: available at https://www.nwtime.org/support
Sep 16 05:00:11.414072 ntpd[1855]: ----------------------------------------------------
Sep 16 05:00:11.439378 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: proto: precision = 0.088 usec (-23)
Sep 16 05:00:11.438270 ntpd[1855]: proto: precision = 0.088 usec (-23)
Sep 16 05:00:11.441015 extend-filesystems[1852]: Resized partition /dev/nvme0n1p9
Sep 16 05:00:11.441527 ntpd[1855]: basedate set to 2025-09-04
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: basedate set to 2025-09-04
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: gps base set to 2025-09-07 (week 2383)
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Listen normally on 3 lo [::1]:123
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: Listening on routing socket on fd #20 for interface updates
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:00:11.447615 ntpd[1855]: 16 Sep 05:00:11 ntpd[1855]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:00:11.441548 ntpd[1855]: gps base set to 2025-09-07 (week 2383)
Sep 16 05:00:11.441758 ntpd[1855]: Listen and drop on 0 v6wildcard [::]:123
Sep 16 05:00:11.441791 ntpd[1855]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 16 05:00:11.441990 ntpd[1855]: Listen normally on 2 lo 127.0.0.1:123
Sep 16 05:00:11.442020 ntpd[1855]: Listen normally on 3 lo [::1]:123
Sep 16 05:00:11.442043 ntpd[1855]: Listening on routing socket on fd #20 for interface updates
Sep 16 05:00:11.443729 ntpd[1855]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:00:11.443758 ntpd[1855]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 16 05:00:11.479243 jq[1897]: true
Sep 16 05:00:11.479361 extend-filesystems[1908]: resize2fs 1.47.3 (8-Jul-2025)
Sep 16 05:00:11.458567 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 16 05:00:11.458083 dbus-daemon[1849]: [system] SELinux support is enabled
Sep 16 05:00:11.477980 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 16 05:00:11.478018 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 16 05:00:11.480247 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 16 05:00:11.480270 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 16 05:00:11.507645 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 16 05:00:11.522385 systemd[1]: Started update-engine.service - Update Engine.
Sep 16 05:00:11.526241 update_engine[1868]: I20250916 05:00:11.525925 1868 update_check_scheduler.cc:74] Next update check in 9m57s
Sep 16 05:00:11.548345 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 16 05:00:11.550340 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 16 05:00:11.602521 (udev-worker)[1832]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 05:00:11.675740 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 16 05:00:11.682391 systemd-networkd[1819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 05:00:11.703734 kernel: mousedev: PS/2 mouse device common for all mice
Sep 16 05:00:11.682403 systemd-networkd[1819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 05:00:11.687150 systemd-networkd[1819]: eth0: Link UP
Sep 16 05:00:11.687380 systemd-networkd[1819]: eth0: Gained carrier
Sep 16 05:00:11.687411 systemd-networkd[1819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 05:00:11.707080 extend-filesystems[1908]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 16 05:00:11.707080 extend-filesystems[1908]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 16 05:00:11.707080 extend-filesystems[1908]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 16 05:00:11.748130 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 16 05:00:11.748174 kernel: ACPI: button: Power Button [PWRF]
Sep 16 05:00:11.748198 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Sep 16 05:00:11.748243 kernel: ACPI: button: Sleep Button [SLPF]
Sep 16 05:00:11.706822 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 16 05:00:11.729534 dbus-daemon[1849]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1819 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 16 05:00:11.748432 extend-filesystems[1852]: Resized filesystem in /dev/nvme0n1p9
Sep 16 05:00:11.762831 bash[1929]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.733 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.740 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.742 INFO Fetch successful
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.742 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.744 INFO Fetch successful
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.744 INFO Fetch successful
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.747 INFO Fetch successful
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.753 INFO Fetch failed with 404: resource not found
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.756 INFO Fetch successful
Sep 16 05:00:11.762987 coreos-metadata[1847]: Sep 16 05:00:11.756 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 16 05:00:11.708256 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 16 05:00:11.773294 coreos-metadata[1847]: Sep 16 05:00:11.763 INFO Fetch successful
Sep 16 05:00:11.773294 coreos-metadata[1847]: Sep 16 05:00:11.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 16 05:00:11.773294 coreos-metadata[1847]: Sep 16 05:00:11.764 INFO Fetch successful
Sep 16 05:00:11.773294 coreos-metadata[1847]: Sep 16 05:00:11.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 16 05:00:11.726144 systemd-networkd[1819]: eth0: DHCPv4 address 172.31.17.131/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 16 05:00:11.728733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 16 05:00:11.756358 systemd[1]: Starting sshkeys.service...
Sep 16 05:00:11.759296 systemd-logind[1863]: New seat seat0.
Sep 16 05:00:11.774597 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 16 05:00:11.785358 coreos-metadata[1847]: Sep 16 05:00:11.779 INFO Fetch successful
Sep 16 05:00:11.785358 coreos-metadata[1847]: Sep 16 05:00:11.779 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 16 05:00:11.778428 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 16 05:00:11.794853 coreos-metadata[1847]: Sep 16 05:00:11.788 INFO Fetch successful
Sep 16 05:00:11.837905 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 16 05:00:11.844054 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 16 05:00:11.934200 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 16 05:00:11.940283 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 16 05:00:12.007537 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Sep 16 05:00:12.045219 coreos-metadata[1950]: Sep 16 05:00:12.042 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 16 05:00:12.045219 coreos-metadata[1950]: Sep 16 05:00:12.044 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 16 05:00:12.045219 coreos-metadata[1950]: Sep 16 05:00:12.045 INFO Fetch successful
Sep 16 05:00:12.045707 coreos-metadata[1950]: Sep 16 05:00:12.045 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 16 05:00:12.046548 coreos-metadata[1950]: Sep 16 05:00:12.046 INFO Fetch successful
Sep 16 05:00:12.048101 unknown[1950]: wrote ssh authorized keys file for user: core
Sep 16 05:00:12.111786 update-ssh-keys[1965]: Updated "/home/core/.ssh/authorized_keys"
Sep 16 05:00:12.110958 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 16 05:00:12.122337 systemd[1]: Finished sshkeys.service.
Sep 16 05:00:12.166859 locksmithd[1915]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 16 05:00:12.356298 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Sep 16 05:00:12.367219 dbus-daemon[1849]: [system] Successfully activated service 'org.freedesktop.hostname1'
Sep 16 05:00:12.368996 dbus-daemon[1849]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1947 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Sep 16 05:00:12.379655 systemd[1]: Starting polkit.service - Authorization Manager...
Sep 16 05:00:12.383341 sshd_keygen[1896]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 16 05:00:12.412548 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 16 05:00:12.448929 containerd[1893]: time="2025-09-16T05:00:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 16 05:00:12.452802 containerd[1893]: time="2025-09-16T05:00:12.452754335Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 16 05:00:12.462541 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 16 05:00:12.471097 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 16 05:00:12.475090 systemd[1]: Started sshd@0-172.31.17.131:22-139.178.68.195:59514.service - OpenSSH per-connection server daemon (139.178.68.195:59514).
Sep 16 05:00:12.506212 systemd[1]: issuegen.service: Deactivated successfully.
Sep 16 05:00:12.506960 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 16 05:00:12.511856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.531760226Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.51µs"
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.531807154Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.531836656Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532041124Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532065057Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532103345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532177372Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532192831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532495158Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532520349Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532537968Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533177 containerd[1893]: time="2025-09-16T05:00:12.532552484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533700 containerd[1893]: time="2025-09-16T05:00:12.532688778Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533700 containerd[1893]: time="2025-09-16T05:00:12.532947697Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533700 containerd[1893]: time="2025-09-16T05:00:12.533000290Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 16 05:00:12.533700 containerd[1893]: time="2025-09-16T05:00:12.533018654Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 16 05:00:12.536527 containerd[1893]: time="2025-09-16T05:00:12.536482881Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 16 05:00:12.541288 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 16 05:00:12.547866 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 16 05:00:12.557881 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 16 05:00:12.559711 systemd[1]: Reached target getty.target - Login Prompts.
Sep 16 05:00:12.563340 containerd[1893]: time="2025-09-16T05:00:12.562185403Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 16 05:00:12.563340 containerd[1893]: time="2025-09-16T05:00:12.562367453Z" level=info msg="metadata content store policy set" policy=shared
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606235146Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606355843Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606460987Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606487823Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606509488Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606526382Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606545423Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606564288Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606581839Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606599610Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606613835Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606635989Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606845312Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 16 05:00:12.608892 containerd[1893]: time="2025-09-16T05:00:12.606880872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.606906075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.606927000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.606944635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.606964943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.606983492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.606998970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.607017426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.607036221Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.607051173Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.607141168Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.607164013Z" level=info msg="Start snapshots syncer"
Sep 16 05:00:12.609517 containerd[1893]: time="2025-09-16T05:00:12.607190012Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 16 05:00:12.613061 containerd[1893]: time="2025-09-16T05:00:12.607520363Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 16 05:00:12.613061 containerd[1893]: time="2025-09-16T05:00:12.607597604Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 16 05:00:12.614723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.629738287Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.629945529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.629984711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630002930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630017665Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630035976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630051231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630065748Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630109988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630124796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630140711Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630182319Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630204805Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 16 05:00:12.633306 containerd[1893]: time="2025-09-16T05:00:12.630219905Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630237536Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630251182Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630264542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630280250Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630300836Z" level=info msg="runtime interface created"
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630309612Z" level=info msg="created NRI interface"
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630321242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630342343Z" level=info msg="Connect containerd service"
Sep 16 05:00:12.634022 containerd[1893]: time="2025-09-16T05:00:12.630383610Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 16 05:00:12.638921 systemd-logind[1863]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 16 05:00:12.647130 containerd[1893]: time="2025-09-16T05:00:12.643002756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 16 05:00:12.666612 systemd-logind[1863]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 16 05:00:12.688316 systemd-logind[1863]: Watching system buttons on /dev/input/event3 (Sleep Button)
Sep 16 05:00:12.795943 polkitd[2003]: Started polkitd version 126
Sep 16 05:00:12.815802 polkitd[2003]: Loading rules from directory /etc/polkit-1/rules.d
Sep 16 05:00:12.816345 polkitd[2003]: Loading rules from directory /run/polkit-1/rules.d
Sep 16 05:00:12.816416 polkitd[2003]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 16 05:00:12.821548 polkitd[2003]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Sep 16 05:00:12.821623 polkitd[2003]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Sep 16 05:00:12.821686 polkitd[2003]: Loading rules from directory /usr/share/polkit-1/rules.d
Sep 16 05:00:12.833961 polkitd[2003]: Finished loading, compiling and executing 2 rules
Sep 16 05:00:12.834301 systemd[1]: Started polkit.service - Authorization Manager.
Sep 16 05:00:12.839470 dbus-daemon[1849]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Sep 16 05:00:12.844290 polkitd[2003]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Sep 16 05:00:12.848284 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 05:00:12.899413 sshd[2033]: Accepted publickey for core from 139.178.68.195 port 59514 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:12.920815 systemd-hostnamed[1947]: Hostname set to (transient)
Sep 16 05:00:12.926525 systemd-resolved[1769]: System hostname changed to 'ip-172-31-17-131'.
Sep 16 05:00:12.930103 sshd-session[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:12.935458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 16 05:00:12.968302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 16 05:00:13.036084 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 16 05:00:13.050950 containerd[1893]: time="2025-09-16T05:00:13.050904582Z" level=info msg="Start subscribing containerd event"
Sep 16 05:00:13.051109 containerd[1893]: time="2025-09-16T05:00:13.051061366Z" level=info msg="Start recovering state"
Sep 16 05:00:13.054859 containerd[1893]: time="2025-09-16T05:00:13.054820731Z" level=info msg="Start event monitor"
Sep 16 05:00:13.054964 containerd[1893]: time="2025-09-16T05:00:13.054864979Z" level=info msg="Start cni network conf syncer for default"
Sep 16 05:00:13.054964 containerd[1893]: time="2025-09-16T05:00:13.054880230Z" level=info msg="Start streaming server"
Sep 16 05:00:13.054964 containerd[1893]: time="2025-09-16T05:00:13.054897733Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 16 05:00:13.054964 containerd[1893]: time="2025-09-16T05:00:13.054908470Z" level=info msg="runtime interface starting up..."
Sep 16 05:00:13.054964 containerd[1893]: time="2025-09-16T05:00:13.054916458Z" level=info msg="starting plugins..."
Sep 16 05:00:13.054964 containerd[1893]: time="2025-09-16T05:00:13.054933261Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 16 05:00:13.056964 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 16 05:00:13.058936 containerd[1893]: time="2025-09-16T05:00:13.058893670Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 16 05:00:13.059040 containerd[1893]: time="2025-09-16T05:00:13.058991688Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 16 05:00:13.059123 containerd[1893]: time="2025-09-16T05:00:13.059105255Z" level=info msg="containerd successfully booted in 0.611561s"
Sep 16 05:00:13.060430 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 16 05:00:13.065122 systemd[1]: Started containerd.service - containerd container runtime.
Sep 16 05:00:13.096291 systemd-logind[1863]: New session 1 of user core.
Sep 16 05:00:13.111370 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 16 05:00:13.120733 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 16 05:00:13.143699 (systemd)[2126]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 16 05:00:13.148063 systemd-logind[1863]: New session c1 of user core.
Sep 16 05:00:13.169731 tar[1873]: linux-amd64/README.md
Sep 16 05:00:13.189462 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 16 05:00:13.309551 systemd[2126]: Queued start job for default target default.target.
Sep 16 05:00:13.320137 systemd[2126]: Created slice app.slice - User Application Slice.
Sep 16 05:00:13.320181 systemd[2126]: Reached target paths.target - Paths.
Sep 16 05:00:13.320240 systemd[2126]: Reached target timers.target - Timers.
Sep 16 05:00:13.321725 systemd[2126]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 16 05:00:13.334510 systemd[2126]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 16 05:00:13.335107 systemd[2126]: Reached target sockets.target - Sockets.
Sep 16 05:00:13.335240 systemd[2126]: Reached target basic.target - Basic System.
Sep 16 05:00:13.335329 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 16 05:00:13.335368 systemd[2126]: Reached target default.target - Main User Target.
Sep 16 05:00:13.335412 systemd[2126]: Startup finished in 176ms.
Sep 16 05:00:13.343109 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 16 05:00:13.491652 systemd[1]: Started sshd@1-172.31.17.131:22-139.178.68.195:59522.service - OpenSSH per-connection server daemon (139.178.68.195:59522).
Sep 16 05:00:13.521836 systemd-networkd[1819]: eth0: Gained IPv6LL
Sep 16 05:00:13.524267 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 16 05:00:13.525210 systemd[1]: Reached target network-online.target - Network is Online.
Sep 16 05:00:13.527590 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 16 05:00:13.531246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 05:00:13.533733 ntpd[1855]: giving up resolving host metadata.google.internal: Name or service not known (-2)
Sep 16 05:00:13.534112 ntpd[1855]: 16 Sep 05:00:13 ntpd[1855]: giving up resolving host metadata.google.internal: Name or service not known (-2)
Sep 16 05:00:13.537357 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 16 05:00:13.581197 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 16 05:00:13.614934 amazon-ssm-agent[2144]: Initializing new seelog logger
Sep 16 05:00:13.615228 amazon-ssm-agent[2144]: New Seelog Logger Creation Complete
Sep 16 05:00:13.615228 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.615228 amazon-ssm-agent[2144]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.616466 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 processing appconfig overrides
Sep 16 05:00:13.616466 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.616466 amazon-ssm-agent[2144]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.616466 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 processing appconfig overrides
Sep 16 05:00:13.616609 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.616609 amazon-ssm-agent[2144]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.616648 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 processing appconfig overrides
Sep 16 05:00:13.616862 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6161 INFO Proxy environment variables:
Sep 16 05:00:13.621390 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.621390 amazon-ssm-agent[2144]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:13.621390 amazon-ssm-agent[2144]: 2025/09/16 05:00:13 processing appconfig overrides
Sep 16 05:00:13.665441 sshd[2140]: Accepted publickey for core from 139.178.68.195 port 59522 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:13.667402 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:13.676685 systemd-logind[1863]: New session 2 of user core.
Sep 16 05:00:13.687951 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 16 05:00:13.717707 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6161 INFO https_proxy:
Sep 16 05:00:13.816697 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6161 INFO http_proxy:
Sep 16 05:00:13.819703 sshd[2161]: Connection closed by 139.178.68.195 port 59522
Sep 16 05:00:13.820360 sshd-session[2140]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:13.828385 systemd[1]: sshd@1-172.31.17.131:22-139.178.68.195:59522.service: Deactivated successfully.
Sep 16 05:00:13.833068 systemd[1]: session-2.scope: Deactivated successfully.
Sep 16 05:00:13.838923 systemd-logind[1863]: Session 2 logged out. Waiting for processes to exit.
Sep 16 05:00:13.859657 systemd-logind[1863]: Removed session 2.
Sep 16 05:00:13.859943 systemd[1]: Started sshd@2-172.31.17.131:22-139.178.68.195:59538.service - OpenSSH per-connection server daemon (139.178.68.195:59538).
Sep 16 05:00:13.913764 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6161 INFO no_proxy:
Sep 16 05:00:14.013212 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6162 INFO Checking if agent identity type OnPrem can be assumed
Sep 16 05:00:14.061069 amazon-ssm-agent[2144]: 2025/09/16 05:00:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:14.061200 amazon-ssm-agent[2144]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 05:00:14.061313 amazon-ssm-agent[2144]: 2025/09/16 05:00:14 processing appconfig overrides
Sep 16 05:00:14.069548 sshd[2167]: Accepted publickey for core from 139.178.68.195 port 59538 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:14.070577 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:14.076572 systemd-logind[1863]: New session 3 of user core.
Sep 16 05:00:14.080897 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 16 05:00:14.100321 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6164 INFO Checking if agent identity type EC2 can be assumed
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6620 INFO Agent will take identity from EC2
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6638 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6638 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6638 INFO [amazon-ssm-agent] Starting Core Agent
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6638 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6638 INFO [Registrar] Starting registrar module
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6651 INFO [EC2Identity] Checking disk for registration info
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6652 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:13.6652 INFO [EC2Identity] Generating registration keypair
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.0190 INFO [EC2Identity] Checking write access before registering
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.0194 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.0608 INFO [EC2Identity] EC2 registration was successful.
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.0609 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.0609 INFO [CredentialRefresher] credentialRefresher has started
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.0610 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.1000 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 16 05:00:14.100608 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.1002 INFO [CredentialRefresher] Credentials ready
Sep 16 05:00:14.111285 amazon-ssm-agent[2144]: 2025-09-16 05:00:14.1005 INFO [CredentialRefresher] Next credential rotation will be in 29.999991961216665 minutes
Sep 16 05:00:14.208187 sshd[2172]: Connection closed by 139.178.68.195 port 59538
Sep 16 05:00:14.208790 sshd-session[2167]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:14.213339 systemd[1]: sshd@2-172.31.17.131:22-139.178.68.195:59538.service: Deactivated successfully.
Sep 16 05:00:14.215173 systemd[1]: session-3.scope: Deactivated successfully.
Sep 16 05:00:14.215923 systemd-logind[1863]: Session 3 logged out. Waiting for processes to exit.
Sep 16 05:00:14.217550 systemd-logind[1863]: Removed session 3.
Sep 16 05:00:15.122838 amazon-ssm-agent[2144]: 2025-09-16 05:00:15.1211 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 16 05:00:15.223507 amazon-ssm-agent[2144]: 2025-09-16 05:00:15.1235 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2179) started
Sep 16 05:00:15.330575 amazon-ssm-agent[2144]: 2025-09-16 05:00:15.1235 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 16 05:00:15.848927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 05:00:15.850191 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 16 05:00:15.851843 systemd[1]: Startup finished in 2.669s (kernel) + 9.137s (initrd) + 13.112s (userspace) = 24.920s.
Sep 16 05:00:15.856438 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 05:00:16.414515 ntpd[1855]: Listen normally on 4 eth0 172.31.17.131:123
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: Listen normally on 4 eth0 172.31.17.131:123
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: Listen normally on 5 eth0 [fe80::416:c0ff:fe5d:5911%2]:123
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: 172.232.15.202 local addr -> 172.31.17.131
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: 23.186.168.126 local addr -> 172.31.17.131
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: 144.202.66.214 local addr -> 172.31.17.131
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: 23.186.168.128 local addr -> 172.31.17.131
Sep 16 05:00:16.415295 ntpd[1855]: 16 Sep 05:00:16 ntpd[1855]: 169.254.169.123 local addr -> 172.31.17.131
Sep 16 05:00:16.414583 ntpd[1855]: Listen normally on 5 eth0 [fe80::416:c0ff:fe5d:5911%2]:123
Sep 16 05:00:16.414611 ntpd[1855]: 172.232.15.202 local addr -> 172.31.17.131
Sep 16 05:00:16.414632 ntpd[1855]: 23.186.168.126 local addr -> 172.31.17.131
Sep 16 05:00:16.414652 ntpd[1855]: 144.202.66.214 local addr -> 172.31.17.131
Sep 16 05:00:16.414870 ntpd[1855]: 23.186.168.128 local addr -> 172.31.17.131
Sep 16 05:00:16.414913 ntpd[1855]: 169.254.169.123 local addr -> 172.31.17.131
Sep 16 05:00:16.962999 kubelet[2195]: E0916 05:00:16.962921 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 05:00:16.965592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 05:00:16.965750 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 05:00:16.966071 systemd[1]: kubelet.service: Consumed 1.064s CPU time, 268.7M memory peak.
Sep 16 05:00:24.254723 systemd[1]: Started sshd@3-172.31.17.131:22-139.178.68.195:45202.service - OpenSSH per-connection server daemon (139.178.68.195:45202).
Sep 16 05:00:24.429260 sshd[2207]: Accepted publickey for core from 139.178.68.195 port 45202 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:24.430854 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:24.436241 systemd-logind[1863]: New session 4 of user core.
Sep 16 05:00:24.443000 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 16 05:00:24.564038 sshd[2210]: Connection closed by 139.178.68.195 port 45202
Sep 16 05:00:24.564712 sshd-session[2207]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:24.569602 systemd[1]: sshd@3-172.31.17.131:22-139.178.68.195:45202.service: Deactivated successfully.
Sep 16 05:00:24.571662 systemd[1]: session-4.scope: Deactivated successfully.
Sep 16 05:00:24.572721 systemd-logind[1863]: Session 4 logged out. Waiting for processes to exit.
Sep 16 05:00:24.574289 systemd-logind[1863]: Removed session 4.
Sep 16 05:00:24.594196 systemd[1]: Started sshd@4-172.31.17.131:22-139.178.68.195:45218.service - OpenSSH per-connection server daemon (139.178.68.195:45218).
Sep 16 05:00:24.766075 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 45218 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:24.767514 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:24.774420 systemd-logind[1863]: New session 5 of user core.
Sep 16 05:00:24.783946 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 16 05:00:24.896910 sshd[2219]: Connection closed by 139.178.68.195 port 45218 Sep 16 05:00:24.897702 sshd-session[2216]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:24.901598 systemd-logind[1863]: Session 5 logged out. Waiting for processes to exit. Sep 16 05:00:24.902128 systemd[1]: sshd@4-172.31.17.131:22-139.178.68.195:45218.service: Deactivated successfully. Sep 16 05:00:24.904161 systemd[1]: session-5.scope: Deactivated successfully. Sep 16 05:00:24.905893 systemd-logind[1863]: Removed session 5. Sep 16 05:00:24.933890 systemd[1]: Started sshd@5-172.31.17.131:22-139.178.68.195:45220.service - OpenSSH per-connection server daemon (139.178.68.195:45220). Sep 16 05:00:25.100917 sshd[2225]: Accepted publickey for core from 139.178.68.195 port 45220 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:25.102203 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:25.108849 systemd-logind[1863]: New session 6 of user core. Sep 16 05:00:25.115007 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 16 05:00:25.235325 sshd[2228]: Connection closed by 139.178.68.195 port 45220 Sep 16 05:00:25.236405 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:25.240587 systemd[1]: sshd@5-172.31.17.131:22-139.178.68.195:45220.service: Deactivated successfully. Sep 16 05:00:25.242318 systemd[1]: session-6.scope: Deactivated successfully. Sep 16 05:00:25.243632 systemd-logind[1863]: Session 6 logged out. Waiting for processes to exit. Sep 16 05:00:25.245461 systemd-logind[1863]: Removed session 6. Sep 16 05:00:25.273012 systemd[1]: Started sshd@6-172.31.17.131:22-139.178.68.195:45224.service - OpenSSH per-connection server daemon (139.178.68.195:45224). 
Sep 16 05:00:25.448603 sshd[2234]: Accepted publickey for core from 139.178.68.195 port 45224 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:25.449895 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:25.456100 systemd-logind[1863]: New session 7 of user core. Sep 16 05:00:25.461897 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 16 05:00:25.592638 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 16 05:00:25.593273 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:00:25.605922 sudo[2238]: pam_unix(sudo:session): session closed for user root Sep 16 05:00:25.629007 sshd[2237]: Connection closed by 139.178.68.195 port 45224 Sep 16 05:00:25.629734 sshd-session[2234]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:25.634049 systemd[1]: sshd@6-172.31.17.131:22-139.178.68.195:45224.service: Deactivated successfully. Sep 16 05:00:25.636015 systemd[1]: session-7.scope: Deactivated successfully. Sep 16 05:00:25.636919 systemd-logind[1863]: Session 7 logged out. Waiting for processes to exit. Sep 16 05:00:25.638206 systemd-logind[1863]: Removed session 7. Sep 16 05:00:25.659421 systemd[1]: Started sshd@7-172.31.17.131:22-139.178.68.195:45234.service - OpenSSH per-connection server daemon (139.178.68.195:45234). Sep 16 05:00:25.830087 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 45234 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:25.831790 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:25.837091 systemd-logind[1863]: New session 8 of user core. Sep 16 05:00:25.842922 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 16 05:00:25.940090 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 16 05:00:25.940362 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:00:25.946013 sudo[2249]: pam_unix(sudo:session): session closed for user root Sep 16 05:00:25.952197 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 16 05:00:25.952691 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:00:25.963777 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 16 05:00:26.009869 augenrules[2271]: No rules Sep 16 05:00:26.011345 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 05:00:26.011567 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 05:00:26.012942 sudo[2248]: pam_unix(sudo:session): session closed for user root Sep 16 05:00:26.034761 sshd[2247]: Connection closed by 139.178.68.195 port 45234 Sep 16 05:00:26.035419 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:26.039613 systemd[1]: sshd@7-172.31.17.131:22-139.178.68.195:45234.service: Deactivated successfully. Sep 16 05:00:26.041517 systemd[1]: session-8.scope: Deactivated successfully. Sep 16 05:00:26.042258 systemd-logind[1863]: Session 8 logged out. Waiting for processes to exit. Sep 16 05:00:26.043991 systemd-logind[1863]: Removed session 8. Sep 16 05:00:26.071538 systemd[1]: Started sshd@8-172.31.17.131:22-139.178.68.195:45244.service - OpenSSH per-connection server daemon (139.178.68.195:45244). 
Sep 16 05:00:26.252172 sshd[2280]: Accepted publickey for core from 139.178.68.195 port 45244 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:26.253159 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:26.257508 systemd-logind[1863]: New session 9 of user core. Sep 16 05:00:26.263941 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 16 05:00:26.362756 sudo[2284]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 16 05:00:26.363041 sudo[2284]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 16 05:00:26.947574 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 16 05:00:26.958134 (dockerd)[2303]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 16 05:00:27.216300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 16 05:00:27.218006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:00:27.471863 dockerd[2303]: time="2025-09-16T05:00:27.471471752Z" level=info msg="Starting up" Sep 16 05:00:27.480029 dockerd[2303]: time="2025-09-16T05:00:27.479942066Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 16 05:00:27.497755 dockerd[2303]: time="2025-09-16T05:00:27.497707172Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 16 05:00:27.516969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:00:27.527091 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:00:27.547576 systemd[1]: var-lib-docker-metacopy\x2dcheck407905932-merged.mount: Deactivated successfully. 
Sep 16 05:00:27.579070 dockerd[2303]: time="2025-09-16T05:00:27.579018648Z" level=info msg="Loading containers: start." Sep 16 05:00:27.594391 kubelet[2331]: E0916 05:00:27.594341 2331 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:00:27.598328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:00:27.598501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:00:27.599340 systemd[1]: kubelet.service: Consumed 203ms CPU time, 109M memory peak. Sep 16 05:00:27.606724 kernel: Initializing XFRM netlink socket Sep 16 05:00:27.896794 (udev-worker)[2338]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:00:27.959067 systemd-networkd[1819]: docker0: Link UP Sep 16 05:00:27.969971 dockerd[2303]: time="2025-09-16T05:00:27.969897556Z" level=info msg="Loading containers: done." 
Sep 16 05:00:27.997775 dockerd[2303]: time="2025-09-16T05:00:27.997715084Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 16 05:00:27.997961 dockerd[2303]: time="2025-09-16T05:00:27.997804589Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 16 05:00:27.997961 dockerd[2303]: time="2025-09-16T05:00:27.997909400Z" level=info msg="Initializing buildkit" Sep 16 05:00:28.037974 dockerd[2303]: time="2025-09-16T05:00:28.037905188Z" level=info msg="Completed buildkit initialization" Sep 16 05:00:28.045244 dockerd[2303]: time="2025-09-16T05:00:28.045193962Z" level=info msg="Daemon has completed initialization" Sep 16 05:00:28.045731 dockerd[2303]: time="2025-09-16T05:00:28.045371595Z" level=info msg="API listen on /run/docker.sock" Sep 16 05:00:28.045754 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 16 05:00:29.334159 containerd[1893]: time="2025-09-16T05:00:29.334119653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 16 05:00:29.922343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534059720.mount: Deactivated successfully. 
Sep 16 05:00:31.468372 containerd[1893]: time="2025-09-16T05:00:31.468311192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:31.470949 containerd[1893]: time="2025-09-16T05:00:31.470896482Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 16 05:00:31.473490 containerd[1893]: time="2025-09-16T05:00:31.473432866Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:31.477352 containerd[1893]: time="2025-09-16T05:00:31.477289466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:31.478473 containerd[1893]: time="2025-09-16T05:00:31.478072378Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.143915747s" Sep 16 05:00:31.478473 containerd[1893]: time="2025-09-16T05:00:31.478107810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 16 05:00:31.479097 containerd[1893]: time="2025-09-16T05:00:31.479068490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 16 05:00:32.979379 containerd[1893]: time="2025-09-16T05:00:32.979323741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:32.980461 containerd[1893]: time="2025-09-16T05:00:32.980258369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 16 05:00:32.981712 containerd[1893]: time="2025-09-16T05:00:32.981684357Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:32.984575 containerd[1893]: time="2025-09-16T05:00:32.984542450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:32.985372 containerd[1893]: time="2025-09-16T05:00:32.985340410Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.506238674s" Sep 16 05:00:32.985455 containerd[1893]: time="2025-09-16T05:00:32.985375078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 16 05:00:32.986067 containerd[1893]: time="2025-09-16T05:00:32.985855075Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 16 05:00:34.363476 containerd[1893]: time="2025-09-16T05:00:34.363417177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:34.365106 containerd[1893]: time="2025-09-16T05:00:34.365057981Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 16 05:00:34.367272 containerd[1893]: time="2025-09-16T05:00:34.367210523Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:34.370644 containerd[1893]: time="2025-09-16T05:00:34.370586122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:34.372441 containerd[1893]: time="2025-09-16T05:00:34.371739712Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.385855449s" Sep 16 05:00:34.372441 containerd[1893]: time="2025-09-16T05:00:34.371781640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 16 05:00:34.372602 containerd[1893]: time="2025-09-16T05:00:34.372588400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 16 05:00:35.438356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022170864.mount: Deactivated successfully. 
Sep 16 05:00:36.038258 containerd[1893]: time="2025-09-16T05:00:36.038203732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:36.039439 containerd[1893]: time="2025-09-16T05:00:36.039266829Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 16 05:00:36.040998 containerd[1893]: time="2025-09-16T05:00:36.040960385Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:36.043106 containerd[1893]: time="2025-09-16T05:00:36.043073923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:36.043806 containerd[1893]: time="2025-09-16T05:00:36.043648055Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.671033036s" Sep 16 05:00:36.043806 containerd[1893]: time="2025-09-16T05:00:36.043695310Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 16 05:00:36.044395 containerd[1893]: time="2025-09-16T05:00:36.044365360Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 16 05:00:37.645522 systemd-resolved[1769]: Clock change detected. Flushing caches. Sep 16 05:00:37.739210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973498795.mount: Deactivated successfully. 
Sep 16 05:00:38.756794 containerd[1893]: time="2025-09-16T05:00:38.756734210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:38.758183 containerd[1893]: time="2025-09-16T05:00:38.757957383Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 16 05:00:38.759249 containerd[1893]: time="2025-09-16T05:00:38.759215957Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:38.762100 containerd[1893]: time="2025-09-16T05:00:38.762069447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:38.763337 containerd[1893]: time="2025-09-16T05:00:38.763286146Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.488100254s" Sep 16 05:00:38.763337 containerd[1893]: time="2025-09-16T05:00:38.763322869Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 16 05:00:38.763852 containerd[1893]: time="2025-09-16T05:00:38.763823839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 16 05:00:39.079910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 16 05:00:39.081511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 16 05:00:39.225074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51401471.mount: Deactivated successfully. Sep 16 05:00:39.236901 containerd[1893]: time="2025-09-16T05:00:39.236846346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:00:39.241610 containerd[1893]: time="2025-09-16T05:00:39.241573313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 16 05:00:39.243899 containerd[1893]: time="2025-09-16T05:00:39.243861401Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:00:39.252684 containerd[1893]: time="2025-09-16T05:00:39.251610379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 16 05:00:39.252684 containerd[1893]: time="2025-09-16T05:00:39.252382037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.529187ms" Sep 16 05:00:39.252684 containerd[1893]: time="2025-09-16T05:00:39.252415216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 16 05:00:39.253531 containerd[1893]: time="2025-09-16T05:00:39.253502759Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.21-0\"" Sep 16 05:00:39.341329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:00:39.352078 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 16 05:00:39.421662 kubelet[2665]: E0916 05:00:39.421587 2665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 16 05:00:39.424285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 16 05:00:39.424490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 16 05:00:39.424921 systemd[1]: kubelet.service: Consumed 178ms CPU time, 108.2M memory peak. Sep 16 05:00:39.717803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710473113.mount: Deactivated successfully. 
Sep 16 05:00:42.257824 containerd[1893]: time="2025-09-16T05:00:42.257769232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:42.258968 containerd[1893]: time="2025-09-16T05:00:42.258729628Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 16 05:00:42.260104 containerd[1893]: time="2025-09-16T05:00:42.260073972Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:42.263410 containerd[1893]: time="2025-09-16T05:00:42.263375242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 16 05:00:42.264260 containerd[1893]: time="2025-09-16T05:00:42.264219124Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.010679222s" Sep 16 05:00:42.264355 containerd[1893]: time="2025-09-16T05:00:42.264265432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 16 05:00:44.182338 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 16 05:00:46.634884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:00:46.635143 systemd[1]: kubelet.service: Consumed 178ms CPU time, 108.2M memory peak. Sep 16 05:00:46.638482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 16 05:00:46.671234 systemd[1]: Reload requested from client PID 2758 ('systemctl') (unit session-9.scope)... Sep 16 05:00:46.671251 systemd[1]: Reloading... Sep 16 05:00:46.774499 zram_generator::config[2798]: No configuration found. Sep 16 05:00:47.058617 systemd[1]: Reloading finished in 386 ms. Sep 16 05:00:47.131455 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 05:00:47.131573 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 05:00:47.131831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:00:47.131887 systemd[1]: kubelet.service: Consumed 91ms CPU time, 69.9M memory peak. Sep 16 05:00:47.134603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 05:00:47.504231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 05:00:47.517042 (kubelet)[2862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 05:00:47.571436 kubelet[2862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 05:00:47.571436 kubelet[2862]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 05:00:47.571436 kubelet[2862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 05:00:47.574368 kubelet[2862]: I0916 05:00:47.574293 2862 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 05:00:48.395350 kubelet[2862]: I0916 05:00:48.395280 2862 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 16 05:00:48.395350 kubelet[2862]: I0916 05:00:48.395315 2862 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 05:00:48.395581 kubelet[2862]: I0916 05:00:48.395543 2862 server.go:956] "Client rotation is on, will bootstrap in background" Sep 16 05:00:48.439767 kubelet[2862]: I0916 05:00:48.439717 2862 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 05:00:48.447284 kubelet[2862]: E0916 05:00:48.447243 2862 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 05:00:48.460587 kubelet[2862]: I0916 05:00:48.460561 2862 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 05:00:48.470271 kubelet[2862]: I0916 05:00:48.470225 2862 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 05:00:48.475031 kubelet[2862]: I0916 05:00:48.474959 2862 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 05:00:48.478970 kubelet[2862]: I0916 05:00:48.475022 2862 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-131","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 05:00:48.478970 kubelet[2862]: I0916 05:00:48.478971 2862 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 
05:00:48.479224 kubelet[2862]: I0916 05:00:48.478989 2862 container_manager_linux.go:303] "Creating device plugin manager" Sep 16 05:00:48.480306 kubelet[2862]: I0916 05:00:48.480275 2862 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:00:48.485982 kubelet[2862]: I0916 05:00:48.485925 2862 kubelet.go:480] "Attempting to sync node with API server" Sep 16 05:00:48.485982 kubelet[2862]: I0916 05:00:48.485969 2862 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 05:00:48.486129 kubelet[2862]: I0916 05:00:48.486003 2862 kubelet.go:386] "Adding apiserver pod source" Sep 16 05:00:48.486129 kubelet[2862]: I0916 05:00:48.486018 2862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 05:00:48.494848 kubelet[2862]: E0916 05:00:48.494801 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-131&limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:00:48.494991 kubelet[2862]: I0916 05:00:48.494928 2862 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 05:00:48.498067 kubelet[2862]: I0916 05:00:48.497933 2862 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 16 05:00:48.500502 kubelet[2862]: W0916 05:00:48.499135 2862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 16 05:00:48.506260 kubelet[2862]: I0916 05:00:48.506208 2862 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 05:00:48.506383 kubelet[2862]: I0916 05:00:48.506278 2862 server.go:1289] "Started kubelet" Sep 16 05:00:48.508376 kubelet[2862]: E0916 05:00:48.508283 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:00:48.511059 kubelet[2862]: I0916 05:00:48.510938 2862 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 05:00:48.511823 kubelet[2862]: I0916 05:00:48.511760 2862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 05:00:48.512124 kubelet[2862]: I0916 05:00:48.512099 2862 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 05:00:48.519727 kubelet[2862]: E0916 05:00:48.515197 2862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.131:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.131:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-131.1865aaa370c2a11e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-131,UID:ip-172-31-17-131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-131,},FirstTimestamp:2025-09-16 05:00:48.50624131 +0000 UTC m=+0.985298137,LastTimestamp:2025-09-16 05:00:48.50624131 +0000 UTC m=+0.985298137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-131,}" Sep 16 05:00:48.522004 kubelet[2862]: I0916 05:00:48.521965 2862 server.go:317] "Adding debug handlers to kubelet server" Sep 16 05:00:48.522596 kubelet[2862]: I0916 05:00:48.522577 2862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 05:00:48.526800 kubelet[2862]: I0916 05:00:48.526775 2862 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 05:00:48.530209 kubelet[2862]: E0916 05:00:48.530183 2862 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-131\" not found" Sep 16 05:00:48.530369 kubelet[2862]: I0916 05:00:48.530361 2862 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 05:00:48.533126 kubelet[2862]: I0916 05:00:48.531867 2862 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 05:00:48.533126 kubelet[2862]: I0916 05:00:48.531949 2862 reconciler.go:26] "Reconciler: start to sync state" Sep 16 05:00:48.533126 kubelet[2862]: E0916 05:00:48.532388 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:00:48.533126 kubelet[2862]: E0916 05:00:48.532695 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-131?timeout=10s\": dial tcp 172.31.17.131:6443: connect: connection refused" interval="200ms" Sep 16 05:00:48.537723 kubelet[2862]: I0916 05:00:48.537323 2862 factory.go:223] Registration of the systemd container factory successfully 
Sep 16 05:00:48.537723 kubelet[2862]: I0916 05:00:48.537419 2862 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 05:00:48.540167 kubelet[2862]: I0916 05:00:48.540145 2862 factory.go:223] Registration of the containerd container factory successfully Sep 16 05:00:48.559411 kubelet[2862]: I0916 05:00:48.559387 2862 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 05:00:48.559411 kubelet[2862]: I0916 05:00:48.559405 2862 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 05:00:48.559411 kubelet[2862]: I0916 05:00:48.559422 2862 state_mem.go:36] "Initialized new in-memory state store" Sep 16 05:00:48.561896 kubelet[2862]: I0916 05:00:48.561803 2862 policy_none.go:49] "None policy: Start" Sep 16 05:00:48.561896 kubelet[2862]: I0916 05:00:48.561836 2862 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 05:00:48.561896 kubelet[2862]: I0916 05:00:48.561847 2862 state_mem.go:35] "Initializing new in-memory state store" Sep 16 05:00:48.571608 kubelet[2862]: I0916 05:00:48.571566 2862 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 16 05:00:48.573597 kubelet[2862]: I0916 05:00:48.572777 2862 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 16 05:00:48.573597 kubelet[2862]: I0916 05:00:48.572796 2862 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 16 05:00:48.573597 kubelet[2862]: I0916 05:00:48.572816 2862 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 05:00:48.573597 kubelet[2862]: I0916 05:00:48.572824 2862 kubelet.go:2436] "Starting kubelet main sync loop" Sep 16 05:00:48.573597 kubelet[2862]: E0916 05:00:48.572861 2862 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 05:00:48.574741 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 05:00:48.578572 kubelet[2862]: E0916 05:00:48.578524 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 05:00:48.590298 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 05:00:48.593555 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 16 05:00:48.602421 kubelet[2862]: E0916 05:00:48.602396 2862 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 16 05:00:48.603132 kubelet[2862]: I0916 05:00:48.602723 2862 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 05:00:48.603132 kubelet[2862]: I0916 05:00:48.602738 2862 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 05:00:48.603132 kubelet[2862]: I0916 05:00:48.602981 2862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 05:00:48.604418 kubelet[2862]: E0916 05:00:48.604403 2862 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 16 05:00:48.604697 kubelet[2862]: E0916 05:00:48.604686 2862 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-131\" not found" Sep 16 05:00:48.686152 systemd[1]: Created slice kubepods-burstable-pod5a7257f36d65ea5be08f1ed8220cf575.slice - libcontainer container kubepods-burstable-pod5a7257f36d65ea5be08f1ed8220cf575.slice. Sep 16 05:00:48.695585 kubelet[2862]: E0916 05:00:48.695380 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:48.697890 systemd[1]: Created slice kubepods-burstable-podd4077919725d04852f3366876bdcf14f.slice - libcontainer container kubepods-burstable-podd4077919725d04852f3366876bdcf14f.slice. Sep 16 05:00:48.704458 kubelet[2862]: I0916 05:00:48.704428 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-131" Sep 16 05:00:48.706235 kubelet[2862]: E0916 05:00:48.706203 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.131:6443/api/v1/nodes\": dial tcp 172.31.17.131:6443: connect: connection refused" node="ip-172-31-17-131" Sep 16 05:00:48.706583 kubelet[2862]: E0916 05:00:48.706544 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:48.709924 systemd[1]: Created slice kubepods-burstable-pod97bd428164c8498f6a20de7def0d9a18.slice - libcontainer container kubepods-burstable-pod97bd428164c8498f6a20de7def0d9a18.slice. 
Sep 16 05:00:48.712261 kubelet[2862]: E0916 05:00:48.712231 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:48.733927 kubelet[2862]: I0916 05:00:48.733670 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a7257f36d65ea5be08f1ed8220cf575-ca-certs\") pod \"kube-apiserver-ip-172-31-17-131\" (UID: \"5a7257f36d65ea5be08f1ed8220cf575\") " pod="kube-system/kube-apiserver-ip-172-31-17-131" Sep 16 05:00:48.733927 kubelet[2862]: I0916 05:00:48.733713 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a7257f36d65ea5be08f1ed8220cf575-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-131\" (UID: \"5a7257f36d65ea5be08f1ed8220cf575\") " pod="kube-system/kube-apiserver-ip-172-31-17-131" Sep 16 05:00:48.733927 kubelet[2862]: I0916 05:00:48.733732 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 05:00:48.733927 kubelet[2862]: I0916 05:00:48.733757 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 05:00:48.733927 kubelet[2862]: I0916 05:00:48.733777 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 05:00:48.734153 kubelet[2862]: I0916 05:00:48.733793 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 05:00:48.734153 kubelet[2862]: I0916 05:00:48.733811 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97bd428164c8498f6a20de7def0d9a18-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-131\" (UID: \"97bd428164c8498f6a20de7def0d9a18\") " pod="kube-system/kube-scheduler-ip-172-31-17-131" Sep 16 05:00:48.734153 kubelet[2862]: I0916 05:00:48.733827 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a7257f36d65ea5be08f1ed8220cf575-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-131\" (UID: \"5a7257f36d65ea5be08f1ed8220cf575\") " pod="kube-system/kube-apiserver-ip-172-31-17-131" Sep 16 05:00:48.734153 kubelet[2862]: I0916 05:00:48.733844 2862 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 05:00:48.734153 kubelet[2862]: E0916 
05:00:48.733880 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-131?timeout=10s\": dial tcp 172.31.17.131:6443: connect: connection refused" interval="400ms" Sep 16 05:00:48.908763 kubelet[2862]: I0916 05:00:48.908640 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-131" Sep 16 05:00:48.909054 kubelet[2862]: E0916 05:00:48.909006 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.131:6443/api/v1/nodes\": dial tcp 172.31.17.131:6443: connect: connection refused" node="ip-172-31-17-131" Sep 16 05:00:48.996887 containerd[1893]: time="2025-09-16T05:00:48.996835229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-131,Uid:5a7257f36d65ea5be08f1ed8220cf575,Namespace:kube-system,Attempt:0,}" Sep 16 05:00:49.008067 containerd[1893]: time="2025-09-16T05:00:49.007827497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-131,Uid:d4077919725d04852f3366876bdcf14f,Namespace:kube-system,Attempt:0,}" Sep 16 05:00:49.017101 containerd[1893]: time="2025-09-16T05:00:49.017051532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-131,Uid:97bd428164c8498f6a20de7def0d9a18,Namespace:kube-system,Attempt:0,}" Sep 16 05:00:49.134572 kubelet[2862]: E0916 05:00:49.134487 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-131?timeout=10s\": dial tcp 172.31.17.131:6443: connect: connection refused" interval="800ms" Sep 16 05:00:49.136567 containerd[1893]: time="2025-09-16T05:00:49.136446399Z" level=info msg="connecting to shim a2a75d3789d45e8b0e6136a75508654c2cd1cd3394393562aa2cb01fd1bace4e" 
address="unix:///run/containerd/s/cc2d91ebd7eabf762a285dbfb1262cbd625649c0235e7e863e01fea61b1d843e" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:00:49.158676 containerd[1893]: time="2025-09-16T05:00:49.158620028Z" level=info msg="connecting to shim 9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f" address="unix:///run/containerd/s/7c17e1f93b483157135ed244fb92dc1e916e27d14c04eb39211176f47b0791cd" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:00:49.159351 containerd[1893]: time="2025-09-16T05:00:49.159312666Z" level=info msg="connecting to shim 5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91" address="unix:///run/containerd/s/387f703f2747c5aa5b602b29008828819fae7bf60ee4b27c226ec65b175464c9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:00:49.275919 systemd[1]: Started cri-containerd-a2a75d3789d45e8b0e6136a75508654c2cd1cd3394393562aa2cb01fd1bace4e.scope - libcontainer container a2a75d3789d45e8b0e6136a75508654c2cd1cd3394393562aa2cb01fd1bace4e. Sep 16 05:00:49.302618 systemd[1]: Started cri-containerd-5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91.scope - libcontainer container 5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91. Sep 16 05:00:49.306601 systemd[1]: Started cri-containerd-9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f.scope - libcontainer container 9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f. 
Sep 16 05:00:49.316417 kubelet[2862]: I0916 05:00:49.315801 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-131" Sep 16 05:00:49.316942 kubelet[2862]: E0916 05:00:49.316892 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.131:6443/api/v1/nodes\": dial tcp 172.31.17.131:6443: connect: connection refused" node="ip-172-31-17-131" Sep 16 05:00:49.401331 containerd[1893]: time="2025-09-16T05:00:49.401286271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-131,Uid:5a7257f36d65ea5be08f1ed8220cf575,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2a75d3789d45e8b0e6136a75508654c2cd1cd3394393562aa2cb01fd1bace4e\"" Sep 16 05:00:49.417048 containerd[1893]: time="2025-09-16T05:00:49.417000845Z" level=info msg="CreateContainer within sandbox \"a2a75d3789d45e8b0e6136a75508654c2cd1cd3394393562aa2cb01fd1bace4e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 05:00:49.418154 containerd[1893]: time="2025-09-16T05:00:49.418046726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-131,Uid:97bd428164c8498f6a20de7def0d9a18,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91\"" Sep 16 05:00:49.427279 containerd[1893]: time="2025-09-16T05:00:49.426970021Z" level=info msg="CreateContainer within sandbox \"5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 05:00:49.438064 containerd[1893]: time="2025-09-16T05:00:49.438019315Z" level=info msg="Container 0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:49.441275 containerd[1893]: time="2025-09-16T05:00:49.441236502Z" level=info msg="Container 2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac: CDI devices 
from CRI Config.CDIDevices: []" Sep 16 05:00:49.442001 containerd[1893]: time="2025-09-16T05:00:49.441961106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-131,Uid:d4077919725d04852f3366876bdcf14f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f\"" Sep 16 05:00:49.449923 containerd[1893]: time="2025-09-16T05:00:49.449883603Z" level=info msg="CreateContainer within sandbox \"9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 05:00:49.455410 containerd[1893]: time="2025-09-16T05:00:49.455302502Z" level=info msg="CreateContainer within sandbox \"5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\"" Sep 16 05:00:49.456395 containerd[1893]: time="2025-09-16T05:00:49.456228146Z" level=info msg="StartContainer for \"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\"" Sep 16 05:00:49.458111 containerd[1893]: time="2025-09-16T05:00:49.458026344Z" level=info msg="connecting to shim 2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac" address="unix:///run/containerd/s/387f703f2747c5aa5b602b29008828819fae7bf60ee4b27c226ec65b175464c9" protocol=ttrpc version=3 Sep 16 05:00:49.465054 containerd[1893]: time="2025-09-16T05:00:49.464992485Z" level=info msg="CreateContainer within sandbox \"a2a75d3789d45e8b0e6136a75508654c2cd1cd3394393562aa2cb01fd1bace4e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33\"" Sep 16 05:00:49.466565 containerd[1893]: time="2025-09-16T05:00:49.466528210Z" level=info msg="StartContainer for \"0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33\"" Sep 
16 05:00:49.468493 containerd[1893]: time="2025-09-16T05:00:49.467627613Z" level=info msg="Container 31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:49.468730 containerd[1893]: time="2025-09-16T05:00:49.468707128Z" level=info msg="connecting to shim 0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33" address="unix:///run/containerd/s/cc2d91ebd7eabf762a285dbfb1262cbd625649c0235e7e863e01fea61b1d843e" protocol=ttrpc version=3 Sep 16 05:00:49.479868 containerd[1893]: time="2025-09-16T05:00:49.479831279Z" level=info msg="CreateContainer within sandbox \"9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\"" Sep 16 05:00:49.480878 containerd[1893]: time="2025-09-16T05:00:49.480841578Z" level=info msg="StartContainer for \"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\"" Sep 16 05:00:49.482010 containerd[1893]: time="2025-09-16T05:00:49.481941378Z" level=info msg="connecting to shim 31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390" address="unix:///run/containerd/s/7c17e1f93b483157135ed244fb92dc1e916e27d14c04eb39211176f47b0791cd" protocol=ttrpc version=3 Sep 16 05:00:49.483814 systemd[1]: Started cri-containerd-2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac.scope - libcontainer container 2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac. Sep 16 05:00:49.504625 systemd[1]: Started cri-containerd-0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33.scope - libcontainer container 0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33. 
Sep 16 05:00:49.520921 systemd[1]: Started cri-containerd-31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390.scope - libcontainer container 31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390. Sep 16 05:00:49.546445 kubelet[2862]: E0916 05:00:49.546022 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:00:49.631611 containerd[1893]: time="2025-09-16T05:00:49.630953526Z" level=info msg="StartContainer for \"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\" returns successfully" Sep 16 05:00:49.647207 containerd[1893]: time="2025-09-16T05:00:49.647092640Z" level=info msg="StartContainer for \"0f740cd43d36f1a4db5a912f840cc2cda9cc385706b60c8887df7384bdc24f33\" returns successfully" Sep 16 05:00:49.654276 containerd[1893]: time="2025-09-16T05:00:49.653884666Z" level=info msg="StartContainer for \"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\" returns successfully" Sep 16 05:00:49.693555 kubelet[2862]: E0916 05:00:49.693513 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-131&limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 16 05:00:49.935410 kubelet[2862]: E0916 05:00:49.935265 2862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-131?timeout=10s\": dial tcp 172.31.17.131:6443: connect: connection refused" interval="1.6s" Sep 16 05:00:49.959548 
kubelet[2862]: E0916 05:00:49.959389 2862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.131:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.131:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-131.1865aaa370c2a11e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-131,UID:ip-172-31-17-131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-131,},FirstTimestamp:2025-09-16 05:00:48.50624131 +0000 UTC m=+0.985298137,LastTimestamp:2025-09-16 05:00:48.50624131 +0000 UTC m=+0.985298137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-131,}" Sep 16 05:00:50.010454 kubelet[2862]: E0916 05:00:50.010407 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 16 05:00:50.106447 kubelet[2862]: E0916 05:00:50.106397 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 16 05:00:50.119624 kubelet[2862]: I0916 05:00:50.119357 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-131" Sep 16 05:00:50.120781 kubelet[2862]: E0916 05:00:50.120749 2862 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.17.131:6443/api/v1/nodes\": dial tcp 172.31.17.131:6443: connect: connection refused" node="ip-172-31-17-131" Sep 16 05:00:50.559446 kubelet[2862]: E0916 05:00:50.559404 2862 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 16 05:00:50.619572 kubelet[2862]: E0916 05:00:50.619289 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:50.625855 kubelet[2862]: E0916 05:00:50.625829 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:50.629538 kubelet[2862]: E0916 05:00:50.628783 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:51.297244 kubelet[2862]: E0916 05:00:51.297189 2862 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 16 05:00:51.631287 kubelet[2862]: E0916 05:00:51.631187 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:51.632933 kubelet[2862]: E0916 05:00:51.632910 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:51.633754 kubelet[2862]: E0916 05:00:51.633362 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:51.725778 kubelet[2862]: I0916 05:00:51.725753 2862 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-131" Sep 16 05:00:52.634442 kubelet[2862]: E0916 05:00:52.634411 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:52.634981 kubelet[2862]: E0916 05:00:52.634958 2862 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:53.362093 kubelet[2862]: E0916 05:00:53.362039 2862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-131\" not found" node="ip-172-31-17-131" Sep 16 05:00:53.410322 kubelet[2862]: I0916 05:00:53.410245 2862 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-131" Sep 16 05:00:53.410322 kubelet[2862]: E0916 05:00:53.410281 2862 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-131\": node \"ip-172-31-17-131\" not found" Sep 16 05:00:53.433037 kubelet[2862]: I0916 05:00:53.433001 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 05:00:53.499597 kubelet[2862]: E0916 05:00:53.499558 2862 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-131\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-131" Sep 16 
05:00:53.499597 kubelet[2862]: I0916 05:00:53.499589 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-131" Sep 16 05:00:53.503491 kubelet[2862]: E0916 05:00:53.502694 2862 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-131\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-131" Sep 16 05:00:53.503491 kubelet[2862]: I0916 05:00:53.502715 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-131" Sep 16 05:00:53.505776 kubelet[2862]: I0916 05:00:53.505753 2862 apiserver.go:52] "Watching apiserver" Sep 16 05:00:53.508885 kubelet[2862]: E0916 05:00:53.508655 2862 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-131\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-131" Sep 16 05:00:53.533600 kubelet[2862]: I0916 05:00:53.533560 2862 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 05:00:53.757799 kubelet[2862]: I0916 05:00:53.757757 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-131" Sep 16 05:00:53.760076 kubelet[2862]: E0916 05:00:53.760039 2862 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-131\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-131" Sep 16 05:00:54.768289 kubelet[2862]: I0916 05:00:54.768069 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-131" Sep 16 05:00:55.484320 systemd[1]: Reload requested from client PID 3144 ('systemctl') (unit session-9.scope)... Sep 16 05:00:55.484339 systemd[1]: Reloading... 
Sep 16 05:00:55.515226 kubelet[2862]: I0916 05:00:55.515199 2862 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:55.628549 zram_generator::config[3188]: No configuration found.
Sep 16 05:00:55.972452 systemd[1]: Reloading finished in 487 ms.
Sep 16 05:00:56.004922 kubelet[2862]: I0916 05:00:56.004874 2862 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 05:00:56.005848 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 05:00:56.023210 systemd[1]: kubelet.service: Deactivated successfully.
Sep 16 05:00:56.023572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 05:00:56.023638 systemd[1]: kubelet.service: Consumed 1.410s CPU time, 127.1M memory peak.
Sep 16 05:00:56.026621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 05:00:56.318586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 05:00:56.327886 (kubelet)[3249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 16 05:00:56.415048 kubelet[3249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 05:00:56.415048 kubelet[3249]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 16 05:00:56.415048 kubelet[3249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 16 05:00:56.415542 kubelet[3249]: I0916 05:00:56.415081 3249 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 16 05:00:56.423654 kubelet[3249]: I0916 05:00:56.423618 3249 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 16 05:00:56.423654 kubelet[3249]: I0916 05:00:56.423645 3249 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 16 05:00:56.423971 kubelet[3249]: I0916 05:00:56.423949 3249 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 16 05:00:56.426976 kubelet[3249]: I0916 05:00:56.426929 3249 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 16 05:00:56.437920 kubelet[3249]: I0916 05:00:56.437867 3249 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 16 05:00:56.447331 kubelet[3249]: I0916 05:00:56.447289 3249 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 16 05:00:56.450670 kubelet[3249]: I0916 05:00:56.450643 3249 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 16 05:00:56.450916 kubelet[3249]: I0916 05:00:56.450877 3249 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 16 05:00:56.451110 kubelet[3249]: I0916 05:00:56.450918 3249 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-131","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 16 05:00:56.451110 kubelet[3249]: I0916 05:00:56.451109 3249 topology_manager.go:138] "Creating topology manager with none policy"
Sep 16 05:00:56.451423 kubelet[3249]: I0916 05:00:56.451125 3249 container_manager_linux.go:303] "Creating device plugin manager"
Sep 16 05:00:56.451423 kubelet[3249]: I0916 05:00:56.451176 3249 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 05:00:56.451423 kubelet[3249]: I0916 05:00:56.451350 3249 kubelet.go:480] "Attempting to sync node with API server"
Sep 16 05:00:56.451423 kubelet[3249]: I0916 05:00:56.451366 3249 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 16 05:00:56.451423 kubelet[3249]: I0916 05:00:56.451395 3249 kubelet.go:386] "Adding apiserver pod source"
Sep 16 05:00:56.451423 kubelet[3249]: I0916 05:00:56.451414 3249 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 16 05:00:56.461146 kubelet[3249]: I0916 05:00:56.461082 3249 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 16 05:00:56.464494 kubelet[3249]: I0916 05:00:56.462880 3249 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 16 05:00:56.470099 kubelet[3249]: I0916 05:00:56.470053 3249 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 16 05:00:56.470355 kubelet[3249]: I0916 05:00:56.470345 3249 server.go:1289] "Started kubelet"
Sep 16 05:00:56.474965 kubelet[3249]: I0916 05:00:56.474939 3249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 16 05:00:56.487483 kubelet[3249]: I0916 05:00:56.487405 3249 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 16 05:00:56.492516 kubelet[3249]: I0916 05:00:56.492198 3249 server.go:317] "Adding debug handlers to kubelet server"
Sep 16 05:00:56.498453 kubelet[3249]: I0916 05:00:56.498388 3249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 16 05:00:56.501195 kubelet[3249]: I0916 05:00:56.501017 3249 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 16 05:00:56.509823 kubelet[3249]: I0916 05:00:56.509772 3249 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 16 05:00:56.514620 kubelet[3249]: I0916 05:00:56.514596 3249 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 16 05:00:56.521129 kubelet[3249]: I0916 05:00:56.520664 3249 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 16 05:00:56.522311 kubelet[3249]: I0916 05:00:56.522294 3249 reconciler.go:26] "Reconciler: start to sync state"
Sep 16 05:00:56.526746 kubelet[3249]: I0916 05:00:56.526720 3249 factory.go:223] Registration of the systemd container factory successfully
Sep 16 05:00:56.527006 kubelet[3249]: I0916 05:00:56.526981 3249 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 16 05:00:56.529580 sudo[3266]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 16 05:00:56.530025 sudo[3266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 16 05:00:56.532003 kubelet[3249]: I0916 05:00:56.531971 3249 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 16 05:00:56.534121 kubelet[3249]: I0916 05:00:56.533215 3249 factory.go:223] Registration of the containerd container factory successfully
Sep 16 05:00:56.534463 kubelet[3249]: I0916 05:00:56.534444 3249 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 16 05:00:56.535963 kubelet[3249]: I0916 05:00:56.535947 3249 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 16 05:00:56.536073 kubelet[3249]: I0916 05:00:56.536060 3249 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 16 05:00:56.536135 kubelet[3249]: I0916 05:00:56.536127 3249 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 16 05:00:56.536250 kubelet[3249]: E0916 05:00:56.536234 3249 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 16 05:00:56.621861 kubelet[3249]: I0916 05:00:56.621745 3249 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.622807 3249 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.622858 3249 state_mem.go:36] "Initialized new in-memory state store"
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.623022 3249 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.623035 3249 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.623056 3249 policy_none.go:49] "None policy: Start"
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.623071 3249 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.623082 3249 state_mem.go:35] "Initializing new in-memory state store"
Sep 16 05:00:56.623576 kubelet[3249]: I0916 05:00:56.623224 3249 state_mem.go:75] "Updated machine memory state"
Sep 16 05:00:56.629673 kubelet[3249]: E0916 05:00:56.629644 3249 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 16 05:00:56.629998 kubelet[3249]: I0916 05:00:56.629982 3249 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 16 05:00:56.630495 kubelet[3249]: I0916 05:00:56.630424 3249 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 16 05:00:56.630896 kubelet[3249]: I0916 05:00:56.630883 3249 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 16 05:00:56.633505 kubelet[3249]: E0916 05:00:56.633435 3249 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 16 05:00:56.637697 kubelet[3249]: I0916 05:00:56.637216 3249 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-131"
Sep 16 05:00:56.645826 kubelet[3249]: I0916 05:00:56.638093 3249 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:56.648599 kubelet[3249]: I0916 05:00:56.638271 3249 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.666293 kubelet[3249]: E0916 05:00:56.666178 3249 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-131\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:56.671640 kubelet[3249]: E0916 05:00:56.671563 3249 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-131\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.748628 kubelet[3249]: I0916 05:00:56.748584 3249 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-131"
Sep 16 05:00:56.761562 kubelet[3249]: I0916 05:00:56.761515 3249 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-131"
Sep 16 05:00:56.762412 kubelet[3249]: I0916 05:00:56.761954 3249 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-131"
Sep 16 05:00:56.824105 kubelet[3249]: I0916 05:00:56.823964 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.824105 kubelet[3249]: I0916 05:00:56.824029 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a7257f36d65ea5be08f1ed8220cf575-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-131\" (UID: \"5a7257f36d65ea5be08f1ed8220cf575\") " pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:56.824105 kubelet[3249]: I0916 05:00:56.824108 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.824347 kubelet[3249]: I0916 05:00:56.824132 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.824347 kubelet[3249]: I0916 05:00:56.824153 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.824347 kubelet[3249]: I0916 05:00:56.824172 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4077919725d04852f3366876bdcf14f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-131\" (UID: \"d4077919725d04852f3366876bdcf14f\") " pod="kube-system/kube-controller-manager-ip-172-31-17-131"
Sep 16 05:00:56.824347 kubelet[3249]: I0916 05:00:56.824194 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97bd428164c8498f6a20de7def0d9a18-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-131\" (UID: \"97bd428164c8498f6a20de7def0d9a18\") " pod="kube-system/kube-scheduler-ip-172-31-17-131"
Sep 16 05:00:56.824347 kubelet[3249]: I0916 05:00:56.824211 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a7257f36d65ea5be08f1ed8220cf575-ca-certs\") pod \"kube-apiserver-ip-172-31-17-131\" (UID: \"5a7257f36d65ea5be08f1ed8220cf575\") " pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:56.835649 kubelet[3249]: I0916 05:00:56.824225 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a7257f36d65ea5be08f1ed8220cf575-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-131\" (UID: \"5a7257f36d65ea5be08f1ed8220cf575\") " pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:57.466870 kubelet[3249]: I0916 05:00:57.466535 3249 apiserver.go:52] "Watching apiserver"
Sep 16 05:00:57.481459 sudo[3266]: pam_unix(sudo:session): session closed for user root
Sep 16 05:00:57.522671 kubelet[3249]: I0916 05:00:57.522629 3249 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 16 05:00:57.590360 kubelet[3249]: I0916 05:00:57.590329 3249 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:57.606412 kubelet[3249]: E0916 05:00:57.606361 3249 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-131\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-131"
Sep 16 05:00:57.630100 kubelet[3249]: I0916 05:00:57.630020 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-131" podStartSLOduration=1.629989943 podStartE2EDuration="1.629989943s" podCreationTimestamp="2025-09-16 05:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:00:57.629867767 +0000 UTC m=+1.274199937" watchObservedRunningTime="2025-09-16 05:00:57.629989943 +0000 UTC m=+1.274322111"
Sep 16 05:00:57.656699 kubelet[3249]: I0916 05:00:57.656301 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-131" podStartSLOduration=3.656282539 podStartE2EDuration="3.656282539s" podCreationTimestamp="2025-09-16 05:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:00:57.644207002 +0000 UTC m=+1.288539171" watchObservedRunningTime="2025-09-16 05:00:57.656282539 +0000 UTC m=+1.300614710"
Sep 16 05:00:57.670379 kubelet[3249]: I0916 05:00:57.670309 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-131" podStartSLOduration=2.6702726070000002 podStartE2EDuration="2.670272607s" podCreationTimestamp="2025-09-16 05:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:00:57.658186776 +0000 UTC m=+1.302518947" watchObservedRunningTime="2025-09-16 05:00:57.670272607 +0000 UTC m=+1.314604780"
Sep 16 05:00:58.164288 update_engine[1868]: I20250916 05:00:58.163704 1868 update_attempter.cc:509] Updating boot flags...
Sep 16 05:00:59.543622 sudo[2284]: pam_unix(sudo:session): session closed for user root
Sep 16 05:00:59.565992 sshd[2283]: Connection closed by 139.178.68.195 port 45244
Sep 16 05:00:59.567118 sshd-session[2280]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:59.571788 systemd[1]: sshd@8-172.31.17.131:22-139.178.68.195:45244.service: Deactivated successfully.
Sep 16 05:00:59.575583 systemd[1]: session-9.scope: Deactivated successfully.
Sep 16 05:00:59.575876 systemd[1]: session-9.scope: Consumed 6.382s CPU time, 208.1M memory peak.
Sep 16 05:00:59.578961 systemd-logind[1863]: Session 9 logged out. Waiting for processes to exit.
Sep 16 05:00:59.580402 systemd-logind[1863]: Removed session 9.
Sep 16 05:01:00.681952 kubelet[3249]: I0916 05:01:00.681907 3249 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 16 05:01:00.682331 containerd[1893]: time="2025-09-16T05:01:00.682200699Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 16 05:01:00.682551 kubelet[3249]: I0916 05:01:00.682426 3249 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 16 05:01:01.850982 kubelet[3249]: I0916 05:01:01.850938 3249 status_manager.go:895] "Failed to get status for pod" podUID="2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03" pod="kube-system/kube-proxy-49w58" err="pods \"kube-proxy-49w58\" is forbidden: User \"system:node:ip-172-31-17-131\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-131' and this object"
Sep 16 05:01:01.853301 kubelet[3249]: E0916 05:01:01.851728 3249 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-131\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-131' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Sep 16 05:01:01.853426 systemd[1]: Created slice kubepods-besteffort-pod2fe752b2_f67d_4fbd_9a11_7afa4ccbbf03.slice - libcontainer container kubepods-besteffort-pod2fe752b2_f67d_4fbd_9a11_7afa4ccbbf03.slice.
Sep 16 05:01:01.880676 kubelet[3249]: E0916 05:01:01.880240 3249 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-17-131\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-131' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret"
Sep 16 05:01:01.880676 kubelet[3249]: E0916 05:01:01.880332 3249 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-17-131\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-131' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Sep 16 05:01:01.880676 kubelet[3249]: E0916 05:01:01.880392 3249 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-17-131\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-131' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap"
Sep 16 05:01:01.880676 kubelet[3249]: E0916 05:01:01.880444 3249 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-17-131\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-131' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret"
Sep 16 05:01:01.900499 kubelet[3249]: I0916 05:01:01.897164 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-lib-modules\") pod \"kube-proxy-49w58\" (UID: \"2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03\") " pod="kube-system/kube-proxy-49w58"
Sep 16 05:01:01.900499 kubelet[3249]: I0916 05:01:01.897285 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-kube-proxy\") pod \"kube-proxy-49w58\" (UID: \"2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03\") " pod="kube-system/kube-proxy-49w58"
Sep 16 05:01:01.900499 kubelet[3249]: I0916 05:01:01.897310 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-xtables-lock\") pod \"kube-proxy-49w58\" (UID: \"2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03\") " pod="kube-system/kube-proxy-49w58"
Sep 16 05:01:01.900499 kubelet[3249]: I0916 05:01:01.897338 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rprv4\" (UniqueName: \"kubernetes.io/projected/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-kube-api-access-rprv4\") pod \"kube-proxy-49w58\" (UID: \"2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03\") " pod="kube-system/kube-proxy-49w58"
Sep 16 05:01:01.944372 systemd[1]: Created slice kubepods-burstable-pod87e26ec0_5339_4d19_9d8f_cff1af90e72a.slice - libcontainer container kubepods-burstable-pod87e26ec0_5339_4d19_9d8f_cff1af90e72a.slice.
Sep 16 05:01:01.997685 kubelet[3249]: I0916 05:01:01.997597 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-xtables-lock\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:01.997962 kubelet[3249]: I0916 05:01:01.997909 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-config-path\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:01.998146 kubelet[3249]: I0916 05:01:01.998049 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hubble-tls\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:01.999592 kubelet[3249]: I0916 05:01:01.999538 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cni-path\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:01.999857 kubelet[3249]: I0916 05:01:01.999749 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-net\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:01.999857 kubelet[3249]: I0916 05:01:01.999827 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-run\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000128 kubelet[3249]: I0916 05:01:02.000090 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcnd\" (UniqueName: \"kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-kube-api-access-2lcnd\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000829 kubelet[3249]: I0916 05:01:02.000452 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-bpf-maps\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000829 kubelet[3249]: I0916 05:01:02.000501 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-cgroup\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000829 kubelet[3249]: I0916 05:01:02.000528 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-lib-modules\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000829 kubelet[3249]: I0916 05:01:02.000550 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87e26ec0-5339-4d19-9d8f-cff1af90e72a-clustermesh-secrets\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000829 kubelet[3249]: I0916 05:01:02.000580 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-kernel\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.000829 kubelet[3249]: I0916 05:01:02.000620 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hostproc\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.001112 kubelet[3249]: I0916 05:01:02.000641 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-etc-cni-netd\") pod \"cilium-6gjwd\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") " pod="kube-system/cilium-6gjwd"
Sep 16 05:01:02.327086 systemd[1]: Created slice kubepods-besteffort-pod520ec74e_1110_4aa1_8d9f_84a7a476f809.slice - libcontainer container kubepods-besteffort-pod520ec74e_1110_4aa1_8d9f_84a7a476f809.slice.
Sep 16 05:01:02.414647 kubelet[3249]: I0916 05:01:02.413991 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8m8l\" (UniqueName: \"kubernetes.io/projected/520ec74e-1110-4aa1-8d9f-84a7a476f809-kube-api-access-c8m8l\") pod \"cilium-operator-6c4d7847fc-jbtzr\" (UID: \"520ec74e-1110-4aa1-8d9f-84a7a476f809\") " pod="kube-system/cilium-operator-6c4d7847fc-jbtzr"
Sep 16 05:01:02.414647 kubelet[3249]: I0916 05:01:02.414047 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/520ec74e-1110-4aa1-8d9f-84a7a476f809-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jbtzr\" (UID: \"520ec74e-1110-4aa1-8d9f-84a7a476f809\") " pod="kube-system/cilium-operator-6c4d7847fc-jbtzr"
Sep 16 05:01:03.025157 kubelet[3249]: E0916 05:01:03.024889 3249 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.031422 kubelet[3249]: E0916 05:01:03.026657 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-kube-proxy podName:2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03 nodeName:}" failed. No retries permitted until 2025-09-16 05:01:03.525202649 +0000 UTC m=+7.169534812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-kube-proxy") pod "kube-proxy-49w58" (UID: "2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.052900 kubelet[3249]: E0916 05:01:03.052311 3249 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.053442 kubelet[3249]: E0916 05:01:03.052913 3249 projected.go:194] Error preparing data for projected volume kube-api-access-rprv4 for pod kube-system/kube-proxy-49w58: failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.053563 kubelet[3249]: E0916 05:01:03.053527 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-kube-api-access-rprv4 podName:2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03 nodeName:}" failed. No retries permitted until 2025-09-16 05:01:03.553494418 +0000 UTC m=+7.197826585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rprv4" (UniqueName: "kubernetes.io/projected/2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03-kube-api-access-rprv4") pod "kube-proxy-49w58" (UID: "2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.106983 kubelet[3249]: E0916 05:01:03.106689 3249 projected.go:264] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Sep 16 05:01:03.106983 kubelet[3249]: E0916 05:01:03.106970 3249 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6gjwd: failed to sync secret cache: timed out waiting for the condition
Sep 16 05:01:03.107197 kubelet[3249]: E0916 05:01:03.107077 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hubble-tls podName:87e26ec0-5339-4d19-9d8f-cff1af90e72a nodeName:}" failed. No retries permitted until 2025-09-16 05:01:03.607053604 +0000 UTC m=+7.251385766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hubble-tls") pod "cilium-6gjwd" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a") : failed to sync secret cache: timed out waiting for the condition
Sep 16 05:01:03.107423 kubelet[3249]: E0916 05:01:03.107357 3249 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.108242 kubelet[3249]: E0916 05:01:03.107406 3249 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Sep 16 05:01:03.110240 kubelet[3249]: E0916 05:01:03.109064 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-config-path podName:87e26ec0-5339-4d19-9d8f-cff1af90e72a nodeName:}" failed. No retries permitted until 2025-09-16 05:01:03.608627215 +0000 UTC m=+7.252959385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-config-path") pod "cilium-6gjwd" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 05:01:03.111218 kubelet[3249]: E0916 05:01:03.110419 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87e26ec0-5339-4d19-9d8f-cff1af90e72a-clustermesh-secrets podName:87e26ec0-5339-4d19-9d8f-cff1af90e72a nodeName:}" failed. No retries permitted until 2025-09-16 05:01:03.609088075 +0000 UTC m=+7.253420223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/87e26ec0-5339-4d19-9d8f-cff1af90e72a-clustermesh-secrets") pod "cilium-6gjwd" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a") : failed to sync secret cache: timed out waiting for the condition
Sep 16 05:01:03.535331 containerd[1893]: time="2025-09-16T05:01:03.535095121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jbtzr,Uid:520ec74e-1110-4aa1-8d9f-84a7a476f809,Namespace:kube-system,Attempt:0,}"
Sep 16 05:01:03.728978 containerd[1893]: time="2025-09-16T05:01:03.728870052Z" level=info msg="connecting to shim 770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72" address="unix:///run/containerd/s/036ef89d665ccbf3c4f715c21fff14273751bf303b3871db7596f20cea685bd1" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:01:03.762077 containerd[1893]: time="2025-09-16T05:01:03.762023168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gjwd,Uid:87e26ec0-5339-4d19-9d8f-cff1af90e72a,Namespace:kube-system,Attempt:0,}"
Sep 16 05:01:03.837268 systemd[1]: Started cri-containerd-770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72.scope - libcontainer container 770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72.
Sep 16 05:01:03.881861 containerd[1893]: time="2025-09-16T05:01:03.881782251Z" level=info msg="connecting to shim 6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f" address="unix:///run/containerd/s/99c2a75478e64df8431758025b2eeb1985abb26fd694dc12acd0418545dcf0b4" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:01:03.979724 systemd[1]: Started cri-containerd-6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f.scope - libcontainer container 6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f.
Sep 16 05:01:04.004271 containerd[1893]: time="2025-09-16T05:01:04.003933461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49w58,Uid:2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03,Namespace:kube-system,Attempt:0,}"
Sep 16 05:01:04.053364 containerd[1893]: time="2025-09-16T05:01:04.053309956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jbtzr,Uid:520ec74e-1110-4aa1-8d9f-84a7a476f809,Namespace:kube-system,Attempt:0,} returns sandbox id \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\""
Sep 16 05:01:04.061998 containerd[1893]: time="2025-09-16T05:01:04.061937273Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 16 05:01:04.078887 containerd[1893]: time="2025-09-16T05:01:04.078741067Z" level=info msg="connecting to shim b08426f6e1b83c122441a36939c9d3daaaa0b86088de5194d6dc498f04fb0d67" address="unix:///run/containerd/s/0158fef76bf6179e72a5df15997958e03029fa80d7da6f85d5dcffac80257d50" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:01:04.093256 containerd[1893]: time="2025-09-16T05:01:04.092961707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gjwd,Uid:87e26ec0-5339-4d19-9d8f-cff1af90e72a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\""
Sep 16 05:01:04.116283 systemd[1]: Started cri-containerd-b08426f6e1b83c122441a36939c9d3daaaa0b86088de5194d6dc498f04fb0d67.scope - libcontainer container b08426f6e1b83c122441a36939c9d3daaaa0b86088de5194d6dc498f04fb0d67.
Sep 16 05:01:04.168107 containerd[1893]: time="2025-09-16T05:01:04.168058937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-49w58,Uid:2fe752b2-f67d-4fbd-9a11-7afa4ccbbf03,Namespace:kube-system,Attempt:0,} returns sandbox id \"b08426f6e1b83c122441a36939c9d3daaaa0b86088de5194d6dc498f04fb0d67\""
Sep 16 05:01:04.175899 containerd[1893]: time="2025-09-16T05:01:04.175762007Z" level=info msg="CreateContainer within sandbox \"b08426f6e1b83c122441a36939c9d3daaaa0b86088de5194d6dc498f04fb0d67\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 05:01:04.193238 containerd[1893]: time="2025-09-16T05:01:04.193096718Z" level=info msg="Container 10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:04.213632 containerd[1893]: time="2025-09-16T05:01:04.213429230Z" level=info msg="CreateContainer within sandbox \"b08426f6e1b83c122441a36939c9d3daaaa0b86088de5194d6dc498f04fb0d67\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9\""
Sep 16 05:01:04.218715 containerd[1893]: time="2025-09-16T05:01:04.216683872Z" level=info msg="StartContainer for \"10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9\""
Sep 16 05:01:04.225277 containerd[1893]: time="2025-09-16T05:01:04.225229442Z" level=info msg="connecting to shim 10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9" address="unix:///run/containerd/s/0158fef76bf6179e72a5df15997958e03029fa80d7da6f85d5dcffac80257d50" protocol=ttrpc version=3
Sep 16 05:01:04.255742 systemd[1]: Started cri-containerd-10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9.scope - libcontainer container 10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9.
Sep 16 05:01:04.311851 containerd[1893]: time="2025-09-16T05:01:04.311806177Z" level=info msg="StartContainer for \"10c2c6f2e8e88ea0a288e3a4de42ef357d6718ea4e169b97d6c1ecfd888151a9\" returns successfully"
Sep 16 05:01:04.644626 kubelet[3249]: I0916 05:01:04.644052 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49w58" podStartSLOduration=3.644033054 podStartE2EDuration="3.644033054s" podCreationTimestamp="2025-09-16 05:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:01:04.643578093 +0000 UTC m=+8.287910265" watchObservedRunningTime="2025-09-16 05:01:04.644033054 +0000 UTC m=+8.288365226"
Sep 16 05:01:05.250679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909968803.mount: Deactivated successfully.
Sep 16 05:01:06.361368 containerd[1893]: time="2025-09-16T05:01:06.361224994Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:01:06.362842 containerd[1893]: time="2025-09-16T05:01:06.362662227Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 16 05:01:06.364623 containerd[1893]: time="2025-09-16T05:01:06.364582324Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:01:06.367334 containerd[1893]: time="2025-09-16T05:01:06.365958564Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.303762244s"
Sep 16 05:01:06.367334 containerd[1893]: time="2025-09-16T05:01:06.366000474Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 16 05:01:06.369848 containerd[1893]: time="2025-09-16T05:01:06.369207425Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 16 05:01:06.379181 containerd[1893]: time="2025-09-16T05:01:06.379127733Z" level=info msg="CreateContainer within sandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 16 05:01:06.430619 containerd[1893]: time="2025-09-16T05:01:06.429885587Z" level=info msg="Container 2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:06.447044 containerd[1893]: time="2025-09-16T05:01:06.446995585Z" level=info msg="CreateContainer within sandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\""
Sep 16 05:01:06.449055 containerd[1893]: time="2025-09-16T05:01:06.448651190Z" level=info msg="StartContainer for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\""
Sep 16 05:01:06.451742 containerd[1893]: time="2025-09-16T05:01:06.451704342Z" level=info msg="connecting to shim 2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b" address="unix:///run/containerd/s/036ef89d665ccbf3c4f715c21fff14273751bf303b3871db7596f20cea685bd1" protocol=ttrpc version=3
Sep 16 05:01:06.484124 systemd[1]: Started cri-containerd-2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b.scope - libcontainer container 2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b.
Sep 16 05:01:06.531209 containerd[1893]: time="2025-09-16T05:01:06.531157271Z" level=info msg="StartContainer for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" returns successfully"
Sep 16 05:01:08.498302 kubelet[3249]: I0916 05:01:08.498193 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jbtzr" podStartSLOduration=5.185592057 podStartE2EDuration="7.498176592s" podCreationTimestamp="2025-09-16 05:01:01 +0000 UTC" firstStartedPulling="2025-09-16 05:01:04.05672497 +0000 UTC m=+7.701057119" lastFinishedPulling="2025-09-16 05:01:06.369309488 +0000 UTC m=+10.013641654" observedRunningTime="2025-09-16 05:01:06.672510122 +0000 UTC m=+10.316842293" watchObservedRunningTime="2025-09-16 05:01:08.498176592 +0000 UTC m=+12.142508760"
Sep 16 05:01:12.000440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917568918.mount: Deactivated successfully.
Sep 16 05:01:14.844567 containerd[1893]: time="2025-09-16T05:01:14.844510008Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:01:14.852300 containerd[1893]: time="2025-09-16T05:01:14.846986127Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 16 05:01:14.852300 containerd[1893]: time="2025-09-16T05:01:14.847140209Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 05:01:14.852300 containerd[1893]: time="2025-09-16T05:01:14.848408629Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.479166475s"
Sep 16 05:01:14.852300 containerd[1893]: time="2025-09-16T05:01:14.848438515Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 16 05:01:14.853592 containerd[1893]: time="2025-09-16T05:01:14.853548748Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 16 05:01:14.920556 containerd[1893]: time="2025-09-16T05:01:14.918735695Z" level=info msg="Container f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:14.920269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915960472.mount: Deactivated successfully.
Sep 16 05:01:14.951267 containerd[1893]: time="2025-09-16T05:01:14.951210136Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\""
Sep 16 05:01:14.952482 containerd[1893]: time="2025-09-16T05:01:14.952440028Z" level=info msg="StartContainer for \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\""
Sep 16 05:01:14.955243 containerd[1893]: time="2025-09-16T05:01:14.955195116Z" level=info msg="connecting to shim f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e" address="unix:///run/containerd/s/99c2a75478e64df8431758025b2eeb1985abb26fd694dc12acd0418545dcf0b4" protocol=ttrpc version=3
Sep 16 05:01:15.056752 systemd[1]: Started cri-containerd-f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e.scope - libcontainer container f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e.
Sep 16 05:01:15.113990 containerd[1893]: time="2025-09-16T05:01:15.113813631Z" level=info msg="StartContainer for \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" returns successfully"
Sep 16 05:01:15.143528 systemd[1]: cri-containerd-f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e.scope: Deactivated successfully.
Sep 16 05:01:15.194840 containerd[1893]: time="2025-09-16T05:01:15.194738976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" id:\"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" pid:3817 exited_at:{seconds:1757998875 nanos:147386507}"
Sep 16 05:01:15.214042 containerd[1893]: time="2025-09-16T05:01:15.213975679Z" level=info msg="received exit event container_id:\"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" id:\"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" pid:3817 exited_at:{seconds:1757998875 nanos:147386507}"
Sep 16 05:01:15.775250 containerd[1893]: time="2025-09-16T05:01:15.775154483Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 16 05:01:15.796155 containerd[1893]: time="2025-09-16T05:01:15.796102637Z" level=info msg="Container cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:15.807355 containerd[1893]: time="2025-09-16T05:01:15.807296005Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\""
Sep 16 05:01:15.808277 containerd[1893]: time="2025-09-16T05:01:15.808204462Z" level=info msg="StartContainer for \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\""
Sep 16 05:01:15.809528 containerd[1893]: time="2025-09-16T05:01:15.809501367Z" level=info msg="connecting to shim cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299" address="unix:///run/containerd/s/99c2a75478e64df8431758025b2eeb1985abb26fd694dc12acd0418545dcf0b4" protocol=ttrpc version=3
Sep 16 05:01:15.835731 systemd[1]: Started cri-containerd-cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299.scope - libcontainer container cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299.
Sep 16 05:01:15.876745 containerd[1893]: time="2025-09-16T05:01:15.876699527Z" level=info msg="StartContainer for \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" returns successfully"
Sep 16 05:01:15.899997 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 05:01:15.901051 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:01:15.901247 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 16 05:01:15.904517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 05:01:15.917881 containerd[1893]: time="2025-09-16T05:01:15.917833663Z" level=info msg="received exit event container_id:\"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" id:\"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" pid:3861 exited_at:{seconds:1757998875 nanos:917540867}"
Sep 16 05:01:15.918305 containerd[1893]: time="2025-09-16T05:01:15.918128061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" id:\"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" pid:3861 exited_at:{seconds:1757998875 nanos:917540867}"
Sep 16 05:01:15.918190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e-rootfs.mount: Deactivated successfully.
Sep 16 05:01:15.918595 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 05:01:15.921671 systemd[1]: cri-containerd-cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299.scope: Deactivated successfully.
Sep 16 05:01:15.960324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299-rootfs.mount: Deactivated successfully.
Sep 16 05:01:15.977983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 05:01:16.776322 containerd[1893]: time="2025-09-16T05:01:16.776262997Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 16 05:01:16.803246 containerd[1893]: time="2025-09-16T05:01:16.800061016Z" level=info msg="Container 0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:16.820762 containerd[1893]: time="2025-09-16T05:01:16.820698941Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\""
Sep 16 05:01:16.822673 containerd[1893]: time="2025-09-16T05:01:16.821926216Z" level=info msg="StartContainer for \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\""
Sep 16 05:01:16.823945 containerd[1893]: time="2025-09-16T05:01:16.823908516Z" level=info msg="connecting to shim 0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936" address="unix:///run/containerd/s/99c2a75478e64df8431758025b2eeb1985abb26fd694dc12acd0418545dcf0b4" protocol=ttrpc version=3
Sep 16 05:01:16.847743 systemd[1]: Started cri-containerd-0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936.scope - libcontainer container 0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936.
Sep 16 05:01:16.932488 containerd[1893]: time="2025-09-16T05:01:16.932423353Z" level=info msg="StartContainer for \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" returns successfully"
Sep 16 05:01:17.081711 systemd[1]: cri-containerd-0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936.scope: Deactivated successfully.
Sep 16 05:01:17.082053 systemd[1]: cri-containerd-0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936.scope: Consumed 28ms CPU time, 5.6M memory peak, 1.2M read from disk.
Sep 16 05:01:17.086026 containerd[1893]: time="2025-09-16T05:01:17.085959758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" id:\"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" pid:3912 exited_at:{seconds:1757998877 nanos:85659945}"
Sep 16 05:01:17.086250 containerd[1893]: time="2025-09-16T05:01:17.086108566Z" level=info msg="received exit event container_id:\"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" id:\"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" pid:3912 exited_at:{seconds:1757998877 nanos:85659945}"
Sep 16 05:01:17.110406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936-rootfs.mount: Deactivated successfully.
Sep 16 05:01:17.788739 containerd[1893]: time="2025-09-16T05:01:17.788697470Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 16 05:01:17.811296 containerd[1893]: time="2025-09-16T05:01:17.810424041Z" level=info msg="Container 60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:17.813483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024369945.mount: Deactivated successfully.
Sep 16 05:01:17.824485 containerd[1893]: time="2025-09-16T05:01:17.824433162Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\""
Sep 16 05:01:17.825277 containerd[1893]: time="2025-09-16T05:01:17.825245994Z" level=info msg="StartContainer for \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\""
Sep 16 05:01:17.826259 containerd[1893]: time="2025-09-16T05:01:17.826192910Z" level=info msg="connecting to shim 60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff" address="unix:///run/containerd/s/99c2a75478e64df8431758025b2eeb1985abb26fd694dc12acd0418545dcf0b4" protocol=ttrpc version=3
Sep 16 05:01:17.856686 systemd[1]: Started cri-containerd-60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff.scope - libcontainer container 60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff.
Sep 16 05:01:17.889235 systemd[1]: cri-containerd-60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff.scope: Deactivated successfully.
Sep 16 05:01:17.892457 containerd[1893]: time="2025-09-16T05:01:17.892415564Z" level=info msg="received exit event container_id:\"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" id:\"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" pid:3954 exited_at:{seconds:1757998877 nanos:892184623}"
Sep 16 05:01:17.892457 containerd[1893]: time="2025-09-16T05:01:17.892853608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" id:\"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" pid:3954 exited_at:{seconds:1757998877 nanos:892184623}"
Sep 16 05:01:17.894651 containerd[1893]: time="2025-09-16T05:01:17.894609501Z" level=info msg="StartContainer for \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" returns successfully"
Sep 16 05:01:17.934592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff-rootfs.mount: Deactivated successfully.
Sep 16 05:01:18.787066 containerd[1893]: time="2025-09-16T05:01:18.787020276Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 16 05:01:18.804011 containerd[1893]: time="2025-09-16T05:01:18.800508872Z" level=info msg="Container 5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:18.827688 containerd[1893]: time="2025-09-16T05:01:18.827637173Z" level=info msg="CreateContainer within sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\""
Sep 16 05:01:18.828617 containerd[1893]: time="2025-09-16T05:01:18.828583136Z" level=info msg="StartContainer for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\""
Sep 16 05:01:18.830997 containerd[1893]: time="2025-09-16T05:01:18.830961293Z" level=info msg="connecting to shim 5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd" address="unix:///run/containerd/s/99c2a75478e64df8431758025b2eeb1985abb26fd694dc12acd0418545dcf0b4" protocol=ttrpc version=3
Sep 16 05:01:18.855928 systemd[1]: Started cri-containerd-5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd.scope - libcontainer container 5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd.
Sep 16 05:01:18.903288 containerd[1893]: time="2025-09-16T05:01:18.903251132Z" level=info msg="StartContainer for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" returns successfully"
Sep 16 05:01:19.378163 containerd[1893]: time="2025-09-16T05:01:19.378129600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" id:\"fd51eb6391997df2eba9975e2e5c6c3eabc80bc2dd2aee65f5f66088a19e48f7\" pid:4021 exited_at:{seconds:1757998879 nanos:377676312}"
Sep 16 05:01:19.459701 kubelet[3249]: I0916 05:01:19.458550 3249 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 16 05:01:19.549221 systemd[1]: Created slice kubepods-burstable-pod5bc65ee3_550c_4ea8_a347_e24da1ae8bca.slice - libcontainer container kubepods-burstable-pod5bc65ee3_550c_4ea8_a347_e24da1ae8bca.slice.
Sep 16 05:01:19.576223 systemd[1]: Created slice kubepods-burstable-pod286ddd48_f107_438a_8b2d_e259450eea5f.slice - libcontainer container kubepods-burstable-pod286ddd48_f107_438a_8b2d_e259450eea5f.slice.
Sep 16 05:01:19.579327 kubelet[3249]: I0916 05:01:19.578930 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6dwz\" (UniqueName: \"kubernetes.io/projected/5bc65ee3-550c-4ea8-a347-e24da1ae8bca-kube-api-access-n6dwz\") pod \"coredns-674b8bbfcf-cxqkn\" (UID: \"5bc65ee3-550c-4ea8-a347-e24da1ae8bca\") " pod="kube-system/coredns-674b8bbfcf-cxqkn" Sep 16 05:01:19.579327 kubelet[3249]: I0916 05:01:19.578963 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bc65ee3-550c-4ea8-a347-e24da1ae8bca-config-volume\") pod \"coredns-674b8bbfcf-cxqkn\" (UID: \"5bc65ee3-550c-4ea8-a347-e24da1ae8bca\") " pod="kube-system/coredns-674b8bbfcf-cxqkn" Sep 16 05:01:19.680116 kubelet[3249]: I0916 05:01:19.679538 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/286ddd48-f107-438a-8b2d-e259450eea5f-config-volume\") pod \"coredns-674b8bbfcf-fghkb\" (UID: \"286ddd48-f107-438a-8b2d-e259450eea5f\") " pod="kube-system/coredns-674b8bbfcf-fghkb" Sep 16 05:01:19.680116 kubelet[3249]: I0916 05:01:19.679609 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wr2q\" (UniqueName: \"kubernetes.io/projected/286ddd48-f107-438a-8b2d-e259450eea5f-kube-api-access-4wr2q\") pod \"coredns-674b8bbfcf-fghkb\" (UID: \"286ddd48-f107-438a-8b2d-e259450eea5f\") " pod="kube-system/coredns-674b8bbfcf-fghkb" Sep 16 05:01:19.814165 kubelet[3249]: I0916 05:01:19.814113 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6gjwd" podStartSLOduration=8.059665302 podStartE2EDuration="18.814096906s" podCreationTimestamp="2025-09-16 05:01:01 +0000 UTC" firstStartedPulling="2025-09-16 05:01:04.095196736 +0000 UTC m=+7.739528905" 
lastFinishedPulling="2025-09-16 05:01:14.84962836 +0000 UTC m=+18.493960509" observedRunningTime="2025-09-16 05:01:19.813508804 +0000 UTC m=+23.457840994" watchObservedRunningTime="2025-09-16 05:01:19.814096906 +0000 UTC m=+23.458429074" Sep 16 05:01:19.853561 containerd[1893]: time="2025-09-16T05:01:19.853522057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cxqkn,Uid:5bc65ee3-550c-4ea8-a347-e24da1ae8bca,Namespace:kube-system,Attempt:0,}" Sep 16 05:01:19.882344 containerd[1893]: time="2025-09-16T05:01:19.882291894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fghkb,Uid:286ddd48-f107-438a-8b2d-e259450eea5f,Namespace:kube-system,Attempt:0,}" Sep 16 05:01:23.462266 (udev-worker)[4116]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:01:23.463405 (udev-worker)[4082]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:01:23.464614 systemd-networkd[1819]: cilium_host: Link UP Sep 16 05:01:23.465444 systemd-networkd[1819]: cilium_net: Link UP Sep 16 05:01:23.466276 systemd-networkd[1819]: cilium_net: Gained carrier Sep 16 05:01:23.466925 systemd-networkd[1819]: cilium_host: Gained carrier Sep 16 05:01:23.581628 systemd-networkd[1819]: cilium_net: Gained IPv6LL Sep 16 05:01:23.783144 (udev-worker)[4122]: Network interface NamePolicy= disabled on kernel command line. 
Sep 16 05:01:23.791190 systemd-networkd[1819]: cilium_vxlan: Link UP
Sep 16 05:01:23.791196 systemd-networkd[1819]: cilium_vxlan: Gained carrier
Sep 16 05:01:24.320644 systemd-networkd[1819]: cilium_host: Gained IPv6LL
Sep 16 05:01:24.759497 kernel: NET: Registered PF_ALG protocol family
Sep 16 05:01:25.153559 systemd-networkd[1819]: cilium_vxlan: Gained IPv6LL
Sep 16 05:01:25.825171 systemd-networkd[1819]: lxc_health: Link UP
Sep 16 05:01:25.830622 systemd-networkd[1819]: lxc_health: Gained carrier
Sep 16 05:01:26.441068 systemd-networkd[1819]: lxc89107e3d5d9e: Link UP
Sep 16 05:01:26.447775 kernel: eth0: renamed from tmp3c348
Sep 16 05:01:26.451409 systemd-networkd[1819]: lxc89107e3d5d9e: Gained carrier
Sep 16 05:01:26.488604 (udev-worker)[4456]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 05:01:26.505140 kernel: eth0: renamed from tmp68899
Sep 16 05:01:26.504364 systemd-networkd[1819]: lxc7e62381d81ca: Link UP
Sep 16 05:01:26.509795 systemd-networkd[1819]: lxc7e62381d81ca: Gained carrier
Sep 16 05:01:27.008678 systemd-networkd[1819]: lxc_health: Gained IPv6LL
Sep 16 05:01:28.354576 systemd-networkd[1819]: lxc7e62381d81ca: Gained IPv6LL
Sep 16 05:01:28.480628 systemd-networkd[1819]: lxc89107e3d5d9e: Gained IPv6LL
Sep 16 05:01:30.645396 ntpd[1855]: Listen normally on 6 cilium_host 192.168.0.72:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 6 cilium_host 192.168.0.72:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 7 cilium_net [fe80::3071:a2ff:fe02:4ae9%4]:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 8 cilium_host [fe80::c0f5:8ff:fe85:2ab6%5]:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 9 cilium_vxlan [fe80::5c9c:e7ff:fe4b:4ef6%6]:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 10 lxc_health [fe80::cc87:87ff:fe81:e0d0%8]:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 11 lxc89107e3d5d9e [fe80::787b:dcff:fec0:5ce2%10]:123
Sep 16 05:01:30.647554 ntpd[1855]: 16 Sep 05:01:30 ntpd[1855]: Listen normally on 12 lxc7e62381d81ca [fe80::8ca5:d2ff:fe9a:203e%12]:123
Sep 16 05:01:30.645503 ntpd[1855]: Listen normally on 7 cilium_net [fe80::3071:a2ff:fe02:4ae9%4]:123
Sep 16 05:01:30.645538 ntpd[1855]: Listen normally on 8 cilium_host [fe80::c0f5:8ff:fe85:2ab6%5]:123
Sep 16 05:01:30.645565 ntpd[1855]: Listen normally on 9 cilium_vxlan [fe80::5c9c:e7ff:fe4b:4ef6%6]:123
Sep 16 05:01:30.645592 ntpd[1855]: Listen normally on 10 lxc_health [fe80::cc87:87ff:fe81:e0d0%8]:123
Sep 16 05:01:30.646004 ntpd[1855]: Listen normally on 11 lxc89107e3d5d9e [fe80::787b:dcff:fec0:5ce2%10]:123
Sep 16 05:01:30.646060 ntpd[1855]: Listen normally on 12 lxc7e62381d81ca [fe80::8ca5:d2ff:fe9a:203e%12]:123
Sep 16 05:01:31.181012 containerd[1893]: time="2025-09-16T05:01:31.180947665Z" level=info msg="connecting to shim 3c348d2f28099bae8412cb3a0dd8a30f7e0b3078e423c5c54a540fc3de01508b" address="unix:///run/containerd/s/86ce512d4f1298533d38390f32a7e30c55545986ff30e45ee0ea0be4c884c1f7" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:01:31.182850 containerd[1893]: time="2025-09-16T05:01:31.181489511Z" level=info msg="connecting to shim 68899b611eba7dd1b25b1072c3b68f12aa4eb23e43e3cbe429f932f1755aeb93" address="unix:///run/containerd/s/eed75409f739c77a1864b0b9a1fd4c4dd9bd2af6254a8d9eeb2d8068046dac15" namespace=k8s.io protocol=ttrpc version=3
Sep 16 05:01:31.250699 systemd[1]: Started cri-containerd-3c348d2f28099bae8412cb3a0dd8a30f7e0b3078e423c5c54a540fc3de01508b.scope - libcontainer container 3c348d2f28099bae8412cb3a0dd8a30f7e0b3078e423c5c54a540fc3de01508b.
Sep 16 05:01:31.259019 systemd[1]: Started cri-containerd-68899b611eba7dd1b25b1072c3b68f12aa4eb23e43e3cbe429f932f1755aeb93.scope - libcontainer container 68899b611eba7dd1b25b1072c3b68f12aa4eb23e43e3cbe429f932f1755aeb93.
Sep 16 05:01:31.344122 containerd[1893]: time="2025-09-16T05:01:31.344073246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fghkb,Uid:286ddd48-f107-438a-8b2d-e259450eea5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c348d2f28099bae8412cb3a0dd8a30f7e0b3078e423c5c54a540fc3de01508b\""
Sep 16 05:01:31.361201 containerd[1893]: time="2025-09-16T05:01:31.361147957Z" level=info msg="CreateContainer within sandbox \"3c348d2f28099bae8412cb3a0dd8a30f7e0b3078e423c5c54a540fc3de01508b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 16 05:01:31.382804 containerd[1893]: time="2025-09-16T05:01:31.380922588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cxqkn,Uid:5bc65ee3-550c-4ea8-a347-e24da1ae8bca,Namespace:kube-system,Attempt:0,} returns sandbox id \"68899b611eba7dd1b25b1072c3b68f12aa4eb23e43e3cbe429f932f1755aeb93\""
Sep 16 05:01:31.388804 containerd[1893]: time="2025-09-16T05:01:31.387958950Z" level=info msg="CreateContainer within sandbox \"68899b611eba7dd1b25b1072c3b68f12aa4eb23e43e3cbe429f932f1755aeb93\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 16 05:01:31.393490 containerd[1893]: time="2025-09-16T05:01:31.393173421Z" level=info msg="Container f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:31.395668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666446387.mount: Deactivated successfully.
Sep 16 05:01:31.398381 containerd[1893]: time="2025-09-16T05:01:31.398270858Z" level=info msg="Container b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:01:31.420352 containerd[1893]: time="2025-09-16T05:01:31.420315938Z" level=info msg="CreateContainer within sandbox \"68899b611eba7dd1b25b1072c3b68f12aa4eb23e43e3cbe429f932f1755aeb93\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87\""
Sep 16 05:01:31.421152 containerd[1893]: time="2025-09-16T05:01:31.421128048Z" level=info msg="StartContainer for \"b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87\""
Sep 16 05:01:31.422239 containerd[1893]: time="2025-09-16T05:01:31.422182017Z" level=info msg="connecting to shim b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87" address="unix:///run/containerd/s/eed75409f739c77a1864b0b9a1fd4c4dd9bd2af6254a8d9eeb2d8068046dac15" protocol=ttrpc version=3
Sep 16 05:01:31.423348 containerd[1893]: time="2025-09-16T05:01:31.423273111Z" level=info msg="CreateContainer within sandbox \"3c348d2f28099bae8412cb3a0dd8a30f7e0b3078e423c5c54a540fc3de01508b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719\""
Sep 16 05:01:31.424371 containerd[1893]: time="2025-09-16T05:01:31.424347625Z" level=info msg="StartContainer for \"f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719\""
Sep 16 05:01:31.426905 containerd[1893]: time="2025-09-16T05:01:31.426875823Z" level=info msg="connecting to shim f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719" address="unix:///run/containerd/s/86ce512d4f1298533d38390f32a7e30c55545986ff30e45ee0ea0be4c884c1f7" protocol=ttrpc version=3
Sep 16 05:01:31.451712 systemd[1]: Started cri-containerd-b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87.scope - libcontainer container b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87.
Sep 16 05:01:31.462010 systemd[1]: Started cri-containerd-f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719.scope - libcontainer container f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719.
Sep 16 05:01:31.516159 containerd[1893]: time="2025-09-16T05:01:31.516064768Z" level=info msg="StartContainer for \"b2584aac513bff1a5317d23421150730f2df5da6c8d175353b0179f173708c87\" returns successfully"
Sep 16 05:01:31.521198 containerd[1893]: time="2025-09-16T05:01:31.521154730Z" level=info msg="StartContainer for \"f97b1547fb1f2a53329be7692e087ba97ad88e4654b78e5666b93ade21583719\" returns successfully"
Sep 16 05:01:31.866490 kubelet[3249]: I0916 05:01:31.863890 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fghkb" podStartSLOduration=29.862441361 podStartE2EDuration="29.862441361s" podCreationTimestamp="2025-09-16 05:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:01:31.861401635 +0000 UTC m=+35.505733800" watchObservedRunningTime="2025-09-16 05:01:31.862441361 +0000 UTC m=+35.506773530"
Sep 16 05:01:31.874959 kubelet[3249]: I0916 05:01:31.874104 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cxqkn" podStartSLOduration=30.87408856 podStartE2EDuration="30.87408856s" podCreationTimestamp="2025-09-16 05:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:01:31.873714572 +0000 UTC m=+35.518046741" watchObservedRunningTime="2025-09-16 05:01:31.87408856 +0000 UTC m=+35.518420729"
Sep 16 05:01:32.081206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558421187.mount: Deactivated successfully.
Sep 16 05:01:33.003758 systemd[1]: Started sshd@9-172.31.17.131:22-139.178.68.195:56214.service - OpenSSH per-connection server daemon (139.178.68.195:56214).
Sep 16 05:01:33.233333 sshd[4637]: Accepted publickey for core from 139.178.68.195 port 56214 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:33.268305 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:33.275504 systemd-logind[1863]: New session 10 of user core.
Sep 16 05:01:33.283983 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 16 05:01:34.353252 sshd[4641]: Connection closed by 139.178.68.195 port 56214
Sep 16 05:01:34.354126 sshd-session[4637]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:34.370752 systemd[1]: sshd@9-172.31.17.131:22-139.178.68.195:56214.service: Deactivated successfully.
Sep 16 05:01:34.374352 systemd[1]: session-10.scope: Deactivated successfully.
Sep 16 05:01:34.374584 systemd[1]: session-10.scope: Consumed 398ms CPU time, 64.8M memory peak.
Sep 16 05:01:34.375968 systemd-logind[1863]: Session 10 logged out. Waiting for processes to exit.
Sep 16 05:01:34.377414 systemd-logind[1863]: Removed session 10.
Sep 16 05:01:39.393978 systemd[1]: Started sshd@10-172.31.17.131:22-139.178.68.195:56218.service - OpenSSH per-connection server daemon (139.178.68.195:56218).
Sep 16 05:01:39.599964 sshd[4690]: Accepted publickey for core from 139.178.68.195 port 56218 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:39.602944 sshd-session[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:39.608832 systemd-logind[1863]: New session 11 of user core.
Sep 16 05:01:39.618731 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 16 05:01:39.913580 sshd[4693]: Connection closed by 139.178.68.195 port 56218
Sep 16 05:01:39.914162 sshd-session[4690]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:39.918378 systemd[1]: sshd@10-172.31.17.131:22-139.178.68.195:56218.service: Deactivated successfully.
Sep 16 05:01:39.920396 systemd[1]: session-11.scope: Deactivated successfully.
Sep 16 05:01:39.921637 systemd-logind[1863]: Session 11 logged out. Waiting for processes to exit.
Sep 16 05:01:39.923169 systemd-logind[1863]: Removed session 11.
Sep 16 05:01:44.950932 systemd[1]: Started sshd@11-172.31.17.131:22-139.178.68.195:53264.service - OpenSSH per-connection server daemon (139.178.68.195:53264).
Sep 16 05:01:45.125693 sshd[4711]: Accepted publickey for core from 139.178.68.195 port 53264 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:45.127122 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:45.134898 systemd-logind[1863]: New session 12 of user core.
Sep 16 05:01:45.148807 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 16 05:01:45.364789 sshd[4714]: Connection closed by 139.178.68.195 port 53264
Sep 16 05:01:45.366516 sshd-session[4711]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:45.370438 systemd[1]: sshd@11-172.31.17.131:22-139.178.68.195:53264.service: Deactivated successfully.
Sep 16 05:01:45.373133 systemd[1]: session-12.scope: Deactivated successfully.
Sep 16 05:01:45.374162 systemd-logind[1863]: Session 12 logged out. Waiting for processes to exit.
Sep 16 05:01:45.375906 systemd-logind[1863]: Removed session 12.
Sep 16 05:01:50.403441 systemd[1]: Started sshd@12-172.31.17.131:22-139.178.68.195:52796.service - OpenSSH per-connection server daemon (139.178.68.195:52796).
Sep 16 05:01:50.585613 sshd[4728]: Accepted publickey for core from 139.178.68.195 port 52796 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:50.587025 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:50.594236 systemd-logind[1863]: New session 13 of user core.
Sep 16 05:01:50.608724 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 16 05:01:50.801603 sshd[4731]: Connection closed by 139.178.68.195 port 52796
Sep 16 05:01:50.802613 sshd-session[4728]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:50.806915 systemd[1]: sshd@12-172.31.17.131:22-139.178.68.195:52796.service: Deactivated successfully.
Sep 16 05:01:50.809801 systemd[1]: session-13.scope: Deactivated successfully.
Sep 16 05:01:50.810967 systemd-logind[1863]: Session 13 logged out. Waiting for processes to exit.
Sep 16 05:01:50.812984 systemd-logind[1863]: Removed session 13.
Sep 16 05:01:50.837988 systemd[1]: Started sshd@13-172.31.17.131:22-139.178.68.195:52806.service - OpenSSH per-connection server daemon (139.178.68.195:52806).
Sep 16 05:01:51.031899 sshd[4744]: Accepted publickey for core from 139.178.68.195 port 52806 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:51.033308 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:51.039092 systemd-logind[1863]: New session 14 of user core.
Sep 16 05:01:51.045715 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 16 05:01:51.296602 sshd[4747]: Connection closed by 139.178.68.195 port 52806
Sep 16 05:01:51.297279 sshd-session[4744]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:51.304841 systemd[1]: sshd@13-172.31.17.131:22-139.178.68.195:52806.service: Deactivated successfully.
Sep 16 05:01:51.308840 systemd[1]: session-14.scope: Deactivated successfully.
Sep 16 05:01:51.311391 systemd-logind[1863]: Session 14 logged out. Waiting for processes to exit.
Sep 16 05:01:51.329005 systemd-logind[1863]: Removed session 14.
Sep 16 05:01:51.330512 systemd[1]: Started sshd@14-172.31.17.131:22-139.178.68.195:52814.service - OpenSSH per-connection server daemon (139.178.68.195:52814).
Sep 16 05:01:51.501158 sshd[4757]: Accepted publickey for core from 139.178.68.195 port 52814 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:51.502711 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:51.507680 systemd-logind[1863]: New session 15 of user core.
Sep 16 05:01:51.518707 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 16 05:01:51.707538 sshd[4760]: Connection closed by 139.178.68.195 port 52814
Sep 16 05:01:51.708833 sshd-session[4757]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:51.716053 systemd[1]: sshd@14-172.31.17.131:22-139.178.68.195:52814.service: Deactivated successfully.
Sep 16 05:01:51.717220 systemd-logind[1863]: Session 15 logged out. Waiting for processes to exit.
Sep 16 05:01:51.720631 systemd[1]: session-15.scope: Deactivated successfully.
Sep 16 05:01:51.723672 systemd-logind[1863]: Removed session 15.
Sep 16 05:01:56.754304 systemd[1]: Started sshd@15-172.31.17.131:22-139.178.68.195:52830.service - OpenSSH per-connection server daemon (139.178.68.195:52830).
Sep 16 05:01:56.929534 sshd[4775]: Accepted publickey for core from 139.178.68.195 port 52830 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:01:56.931007 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:01:56.936625 systemd-logind[1863]: New session 16 of user core.
Sep 16 05:01:56.940789 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 16 05:01:57.126211 sshd[4778]: Connection closed by 139.178.68.195 port 52830
Sep 16 05:01:57.127347 sshd-session[4775]: pam_unix(sshd:session): session closed for user core
Sep 16 05:01:57.132014 systemd[1]: sshd@15-172.31.17.131:22-139.178.68.195:52830.service: Deactivated successfully.
Sep 16 05:01:57.134453 systemd[1]: session-16.scope: Deactivated successfully.
Sep 16 05:01:57.136019 systemd-logind[1863]: Session 16 logged out. Waiting for processes to exit.
Sep 16 05:01:57.137773 systemd-logind[1863]: Removed session 16.
Sep 16 05:02:02.237379 systemd[1]: Started sshd@16-172.31.17.131:22-139.178.68.195:46468.service - OpenSSH per-connection server daemon (139.178.68.195:46468).
Sep 16 05:02:02.647484 sshd[4790]: Accepted publickey for core from 139.178.68.195 port 46468 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:02.652403 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:02.679548 systemd-logind[1863]: New session 17 of user core.
Sep 16 05:02:02.688787 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 16 05:02:02.934269 sshd[4793]: Connection closed by 139.178.68.195 port 46468
Sep 16 05:02:02.935504 sshd-session[4790]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:02.947082 systemd[1]: sshd@16-172.31.17.131:22-139.178.68.195:46468.service: Deactivated successfully.
Sep 16 05:02:02.958336 systemd[1]: session-17.scope: Deactivated successfully.
Sep 16 05:02:02.968582 systemd-logind[1863]: Session 17 logged out. Waiting for processes to exit.
Sep 16 05:02:03.010378 systemd[1]: Started sshd@17-172.31.17.131:22-139.178.68.195:46472.service - OpenSSH per-connection server daemon (139.178.68.195:46472).
Sep 16 05:02:03.011938 systemd-logind[1863]: Removed session 17.
Sep 16 05:02:03.236606 sshd[4805]: Accepted publickey for core from 139.178.68.195 port 46472 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:03.242490 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:03.250595 systemd-logind[1863]: New session 18 of user core.
Sep 16 05:02:03.258441 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 16 05:02:06.375631 sshd[4808]: Connection closed by 139.178.68.195 port 46472
Sep 16 05:02:06.376958 sshd-session[4805]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:06.386750 systemd[1]: sshd@17-172.31.17.131:22-139.178.68.195:46472.service: Deactivated successfully.
Sep 16 05:02:06.389766 systemd[1]: session-18.scope: Deactivated successfully.
Sep 16 05:02:06.390786 systemd-logind[1863]: Session 18 logged out. Waiting for processes to exit.
Sep 16 05:02:06.393100 systemd-logind[1863]: Removed session 18.
Sep 16 05:02:06.405922 systemd[1]: Started sshd@18-172.31.17.131:22-139.178.68.195:46482.service - OpenSSH per-connection server daemon (139.178.68.195:46482).
Sep 16 05:02:06.599149 sshd[4820]: Accepted publickey for core from 139.178.68.195 port 46482 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:06.600726 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:06.605554 systemd-logind[1863]: New session 19 of user core.
Sep 16 05:02:06.610674 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 16 05:02:07.592979 sshd[4823]: Connection closed by 139.178.68.195 port 46482
Sep 16 05:02:07.593632 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:07.599342 systemd[1]: sshd@18-172.31.17.131:22-139.178.68.195:46482.service: Deactivated successfully.
Sep 16 05:02:07.601573 systemd[1]: session-19.scope: Deactivated successfully.
Sep 16 05:02:07.602387 systemd-logind[1863]: Session 19 logged out. Waiting for processes to exit.
Sep 16 05:02:07.604907 systemd-logind[1863]: Removed session 19.
Sep 16 05:02:07.630673 systemd[1]: Started sshd@19-172.31.17.131:22-139.178.68.195:46488.service - OpenSSH per-connection server daemon (139.178.68.195:46488).
Sep 16 05:02:07.818373 sshd[4840]: Accepted publickey for core from 139.178.68.195 port 46488 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:07.821168 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:07.827410 systemd-logind[1863]: New session 20 of user core.
Sep 16 05:02:07.834737 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 16 05:02:08.257450 sshd[4843]: Connection closed by 139.178.68.195 port 46488
Sep 16 05:02:08.259311 sshd-session[4840]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:08.263680 systemd-logind[1863]: Session 20 logged out. Waiting for processes to exit.
Sep 16 05:02:08.264459 systemd[1]: sshd@19-172.31.17.131:22-139.178.68.195:46488.service: Deactivated successfully.
Sep 16 05:02:08.267319 systemd[1]: session-20.scope: Deactivated successfully.
Sep 16 05:02:08.269993 systemd-logind[1863]: Removed session 20.
Sep 16 05:02:08.293028 systemd[1]: Started sshd@20-172.31.17.131:22-139.178.68.195:46504.service - OpenSSH per-connection server daemon (139.178.68.195:46504).
Sep 16 05:02:08.471218 sshd[4853]: Accepted publickey for core from 139.178.68.195 port 46504 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:08.472907 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:08.477602 systemd-logind[1863]: New session 21 of user core.
Sep 16 05:02:08.484701 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 16 05:02:08.673614 sshd[4856]: Connection closed by 139.178.68.195 port 46504
Sep 16 05:02:08.674689 sshd-session[4853]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:08.679704 systemd[1]: sshd@20-172.31.17.131:22-139.178.68.195:46504.service: Deactivated successfully.
Sep 16 05:02:08.681841 systemd[1]: session-21.scope: Deactivated successfully.
Sep 16 05:02:08.682764 systemd-logind[1863]: Session 21 logged out. Waiting for processes to exit.
Sep 16 05:02:08.685390 systemd-logind[1863]: Removed session 21.
Sep 16 05:02:13.714089 systemd[1]: Started sshd@21-172.31.17.131:22-139.178.68.195:46092.service - OpenSSH per-connection server daemon (139.178.68.195:46092).
Sep 16 05:02:13.898139 sshd[4869]: Accepted publickey for core from 139.178.68.195 port 46092 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:13.899441 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:13.904886 systemd-logind[1863]: New session 22 of user core.
Sep 16 05:02:13.909657 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 16 05:02:14.090090 sshd[4872]: Connection closed by 139.178.68.195 port 46092
Sep 16 05:02:14.090681 sshd-session[4869]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:14.095078 systemd[1]: sshd@21-172.31.17.131:22-139.178.68.195:46092.service: Deactivated successfully.
Sep 16 05:02:14.097657 systemd[1]: session-22.scope: Deactivated successfully.
Sep 16 05:02:14.098560 systemd-logind[1863]: Session 22 logged out. Waiting for processes to exit.
Sep 16 05:02:14.100617 systemd-logind[1863]: Removed session 22.
Sep 16 05:02:19.122845 systemd[1]: Started sshd@22-172.31.17.131:22-139.178.68.195:46094.service - OpenSSH per-connection server daemon (139.178.68.195:46094).
Sep 16 05:02:19.302617 sshd[4885]: Accepted publickey for core from 139.178.68.195 port 46094 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:19.304083 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:19.310401 systemd-logind[1863]: New session 23 of user core.
Sep 16 05:02:19.321736 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 16 05:02:19.532293 sshd[4888]: Connection closed by 139.178.68.195 port 46094
Sep 16 05:02:19.533101 sshd-session[4885]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:19.540459 systemd[1]: sshd@22-172.31.17.131:22-139.178.68.195:46094.service: Deactivated successfully.
Sep 16 05:02:19.543164 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 05:02:19.545285 systemd-logind[1863]: Session 23 logged out. Waiting for processes to exit.
Sep 16 05:02:19.547849 systemd-logind[1863]: Removed session 23.
Sep 16 05:02:24.565597 systemd[1]: Started sshd@23-172.31.17.131:22-139.178.68.195:50552.service - OpenSSH per-connection server daemon (139.178.68.195:50552).
Sep 16 05:02:24.753727 sshd[4900]: Accepted publickey for core from 139.178.68.195 port 50552 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:24.756257 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:24.761679 systemd-logind[1863]: New session 24 of user core.
Sep 16 05:02:24.766704 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 05:02:25.040496 sshd[4903]: Connection closed by 139.178.68.195 port 50552
Sep 16 05:02:25.041453 sshd-session[4900]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:25.048830 systemd[1]: sshd@23-172.31.17.131:22-139.178.68.195:50552.service: Deactivated successfully.
Sep 16 05:02:25.053933 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 05:02:25.057958 systemd-logind[1863]: Session 24 logged out. Waiting for processes to exit.
Sep 16 05:02:25.074002 systemd[1]: Started sshd@24-172.31.17.131:22-139.178.68.195:50554.service - OpenSSH per-connection server daemon (139.178.68.195:50554).
Sep 16 05:02:25.075801 systemd-logind[1863]: Removed session 24.
Sep 16 05:02:25.250524 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 50554 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:02:25.252324 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:02:25.259089 systemd-logind[1863]: New session 25 of user core.
Sep 16 05:02:25.262688 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 16 05:02:28.284586 containerd[1893]: time="2025-09-16T05:02:28.284438002Z" level=info msg="StopContainer for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" with timeout 30 (s)"
Sep 16 05:02:28.292514 containerd[1893]: time="2025-09-16T05:02:28.291665199Z" level=info msg="Stop container \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" with signal terminated"
Sep 16 05:02:28.293077 containerd[1893]: time="2025-09-16T05:02:28.293036572Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 16 05:02:28.294829 containerd[1893]: time="2025-09-16T05:02:28.294794840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" id:\"757ca85f7482015e4d11a46a7d300fd174d196e5a15f50866b8a55fe4ef60a23\" pid:4936 exited_at:{seconds:1757998948 nanos:294429789}"
Sep 16 05:02:28.313088 containerd[1893]: time="2025-09-16T05:02:28.312900759Z" level=info msg="StopContainer for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" with timeout 2 (s)"
Sep 16 05:02:28.313794 containerd[1893]: time="2025-09-16T05:02:28.313627282Z" level=info msg="Stop container \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" with signal terminated"
Sep 16 05:02:28.326016 systemd-networkd[1819]: lxc_health: Link DOWN
Sep 16 05:02:28.326026 systemd-networkd[1819]: lxc_health: Lost carrier
Sep 16 05:02:28.346943 systemd[1]: cri-containerd-5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd.scope: Deactivated successfully.
Sep 16 05:02:28.347625 systemd[1]: cri-containerd-5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd.scope: Consumed 7.983s CPU time, 199.8M memory peak, 81.9M read from disk, 13.3M written to disk.
Sep 16 05:02:28.349909 containerd[1893]: time="2025-09-16T05:02:28.349783955Z" level=info msg="received exit event container_id:\"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" id:\"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" pid:3993 exited_at:{seconds:1757998948 nanos:348787396}"
Sep 16 05:02:28.350126 containerd[1893]: time="2025-09-16T05:02:28.350100903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" id:\"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" pid:3993 exited_at:{seconds:1757998948 nanos:348787396}"
Sep 16 05:02:28.387941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd-rootfs.mount: Deactivated successfully.
Sep 16 05:02:28.390574 systemd[1]: cri-containerd-2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b.scope: Deactivated successfully.
Sep 16 05:02:28.392168 systemd[1]: cri-containerd-2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b.scope: Consumed 432ms CPU time, 39.7M memory peak, 16.7M read from disk, 4K written to disk.
Sep 16 05:02:28.395918 containerd[1893]: time="2025-09-16T05:02:28.392987691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" id:\"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" pid:3748 exited_at:{seconds:1757998948 nanos:389321717}"
Sep 16 05:02:28.395918 containerd[1893]: time="2025-09-16T05:02:28.394838788Z" level=info msg="received exit event container_id:\"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" id:\"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" pid:3748 exited_at:{seconds:1757998948 nanos:389321717}"
Sep 16 05:02:28.421288 containerd[1893]: time="2025-09-16T05:02:28.421161585Z" level=info msg="StopContainer for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" returns successfully"
Sep 16 05:02:28.423160 containerd[1893]: time="2025-09-16T05:02:28.423123971Z" level=info msg="StopPodSandbox for \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\""
Sep 16 05:02:28.434242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b-rootfs.mount: Deactivated successfully.
Sep 16 05:02:28.438698 containerd[1893]: time="2025-09-16T05:02:28.438653050Z" level=info msg="Container to stop \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:02:28.438698 containerd[1893]: time="2025-09-16T05:02:28.438690214Z" level=info msg="Container to stop \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:02:28.438989 containerd[1893]: time="2025-09-16T05:02:28.438703365Z" level=info msg="Container to stop \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:02:28.438989 containerd[1893]: time="2025-09-16T05:02:28.438714260Z" level=info msg="Container to stop \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:02:28.438989 containerd[1893]: time="2025-09-16T05:02:28.438726628Z" level=info msg="Container to stop \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:02:28.453441 containerd[1893]: time="2025-09-16T05:02:28.453393881Z" level=info msg="StopContainer for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" returns successfully"
Sep 16 05:02:28.455896 containerd[1893]: time="2025-09-16T05:02:28.455851280Z" level=info msg="StopPodSandbox for \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\""
Sep 16 05:02:28.456283 containerd[1893]: time="2025-09-16T05:02:28.455940374Z" level=info msg="Container to stop \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:02:28.458199 systemd[1]: cri-containerd-6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f.scope: Deactivated successfully.
Sep 16 05:02:28.468885 systemd[1]: cri-containerd-770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72.scope: Deactivated successfully.
Sep 16 05:02:28.469163 containerd[1893]: time="2025-09-16T05:02:28.469126629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" id:\"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" pid:3500 exit_status:137 exited_at:{seconds:1757998948 nanos:464914865}"
Sep 16 05:02:28.510418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f-rootfs.mount: Deactivated successfully.
Sep 16 05:02:28.522153 containerd[1893]: time="2025-09-16T05:02:28.521683376Z" level=info msg="shim disconnected" id=6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f namespace=k8s.io
Sep 16 05:02:28.522153 containerd[1893]: time="2025-09-16T05:02:28.521723788Z" level=warning msg="cleaning up after shim disconnected" id=6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f namespace=k8s.io
Sep 16 05:02:28.521857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72-rootfs.mount: Deactivated successfully.
Sep 16 05:02:28.531765 containerd[1893]: time="2025-09-16T05:02:28.521734731Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 16 05:02:28.533154 containerd[1893]: time="2025-09-16T05:02:28.526567971Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/containerd.sock.ttrpc->@: write: broken pipe"
Sep 16 05:02:28.533314 containerd[1893]: time="2025-09-16T05:02:28.526703166Z" level=info msg="shim disconnected" id=770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72 namespace=k8s.io
Sep 16 05:02:28.533412 containerd[1893]: time="2025-09-16T05:02:28.533397151Z" level=warning msg="cleaning up after shim disconnected" id=770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72 namespace=k8s.io
Sep 16 05:02:28.533524 containerd[1893]: time="2025-09-16T05:02:28.533493142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 16 05:02:28.541666 containerd[1893]: time="2025-09-16T05:02:28.540226637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" id:\"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" pid:3468 exit_status:137 exited_at:{seconds:1757998948 nanos:475786378}"
Sep 16 05:02:28.541666 containerd[1893]: time="2025-09-16T05:02:28.540456368Z" level=info msg="received exit event sandbox_id:\"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" exit_status:137 exited_at:{seconds:1757998948 nanos:475786378}"
Sep 16 05:02:28.541666 containerd[1893]: time="2025-09-16T05:02:28.541197517Z" level=info msg="received exit event sandbox_id:\"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" exit_status:137 exited_at:{seconds:1757998948 nanos:464914865}"
Sep 16 05:02:28.547166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72-shm.mount: Deactivated successfully.
Sep 16 05:02:28.559996 containerd[1893]: time="2025-09-16T05:02:28.559940931Z" level=info msg="TearDown network for sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" successfully"
Sep 16 05:02:28.559996 containerd[1893]: time="2025-09-16T05:02:28.560001146Z" level=info msg="StopPodSandbox for \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" returns successfully"
Sep 16 05:02:28.562292 containerd[1893]: time="2025-09-16T05:02:28.562170011Z" level=info msg="TearDown network for sandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" successfully"
Sep 16 05:02:28.562292 containerd[1893]: time="2025-09-16T05:02:28.562208369Z" level=info msg="StopPodSandbox for \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" returns successfully"
Sep 16 05:02:28.642504 kubelet[3249]: I0916 05:02:28.642018 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-run\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.642504 kubelet[3249]: I0916 05:02:28.642070 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hubble-tls\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.642504 kubelet[3249]: I0916 05:02:28.642093 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcnd\" (UniqueName: \"kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-kube-api-access-2lcnd\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.642504 kubelet[3249]: I0916 05:02:28.642119 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-bpf-maps\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.642504 kubelet[3249]: I0916 05:02:28.642137 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-cgroup\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.642504 kubelet[3249]: I0916 05:02:28.642161 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-config-path\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.643192 kubelet[3249]: I0916 05:02:28.642181 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-kernel\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.643192 kubelet[3249]: I0916 05:02:28.642214 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.643192 kubelet[3249]: I0916 05:02:28.642251 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.643192 kubelet[3249]: I0916 05:02:28.642271 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.646405 kubelet[3249]: I0916 05:02:28.646252 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.650229 kubelet[3249]: I0916 05:02:28.649611 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-net\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650229 kubelet[3249]: I0916 05:02:28.649654 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-lib-modules\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650229 kubelet[3249]: I0916 05:02:28.649679 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hostproc\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650229 kubelet[3249]: I0916 05:02:28.649703 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cni-path\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650229 kubelet[3249]: I0916 05:02:28.649731 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/520ec74e-1110-4aa1-8d9f-84a7a476f809-cilium-config-path\") pod \"520ec74e-1110-4aa1-8d9f-84a7a476f809\" (UID: \"520ec74e-1110-4aa1-8d9f-84a7a476f809\") "
Sep 16 05:02:28.650229 kubelet[3249]: I0916 05:02:28.649754 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-xtables-lock\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649778 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87e26ec0-5339-4d19-9d8f-cff1af90e72a-clustermesh-secrets\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649833 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-etc-cni-netd\") pod \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\" (UID: \"87e26ec0-5339-4d19-9d8f-cff1af90e72a\") "
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649853 3249 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8m8l\" (UniqueName: \"kubernetes.io/projected/520ec74e-1110-4aa1-8d9f-84a7a476f809-kube-api-access-c8m8l\") pod \"520ec74e-1110-4aa1-8d9f-84a7a476f809\" (UID: \"520ec74e-1110-4aa1-8d9f-84a7a476f809\") "
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649906 3249 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-run\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649916 3249 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-bpf-maps\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649924 3249 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-cgroup\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.650497 kubelet[3249]: I0916 05:02:28.649933 3249 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-kernel\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.653410 kubelet[3249]: I0916 05:02:28.653369 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.653547 kubelet[3249]: I0916 05:02:28.653444 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.653547 kubelet[3249]: I0916 05:02:28.653533 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hostproc" (OuterVolumeSpecName: "hostproc") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.653670 kubelet[3249]: I0916 05:02:28.653557 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cni-path" (OuterVolumeSpecName: "cni-path") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.657339 kubelet[3249]: I0916 05:02:28.657113 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520ec74e-1110-4aa1-8d9f-84a7a476f809-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "520ec74e-1110-4aa1-8d9f-84a7a476f809" (UID: "520ec74e-1110-4aa1-8d9f-84a7a476f809"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 16 05:02:28.657339 kubelet[3249]: I0916 05:02:28.657193 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.658804 kubelet[3249]: I0916 05:02:28.658771 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:02:28.659388 kubelet[3249]: I0916 05:02:28.659359 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-kube-api-access-2lcnd" (OuterVolumeSpecName: "kube-api-access-2lcnd") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "kube-api-access-2lcnd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 05:02:28.660914 kubelet[3249]: I0916 05:02:28.660890 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520ec74e-1110-4aa1-8d9f-84a7a476f809-kube-api-access-c8m8l" (OuterVolumeSpecName: "kube-api-access-c8m8l") pod "520ec74e-1110-4aa1-8d9f-84a7a476f809" (UID: "520ec74e-1110-4aa1-8d9f-84a7a476f809"). InnerVolumeSpecName "kube-api-access-c8m8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 05:02:28.661080 kubelet[3249]: I0916 05:02:28.661064 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 05:02:28.661239 kubelet[3249]: I0916 05:02:28.661206 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e26ec0-5339-4d19-9d8f-cff1af90e72a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 16 05:02:28.662982 kubelet[3249]: I0916 05:02:28.662949 3249 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87e26ec0-5339-4d19-9d8f-cff1af90e72a" (UID: "87e26ec0-5339-4d19-9d8f-cff1af90e72a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 16 05:02:28.751060 kubelet[3249]: I0916 05:02:28.751000 3249 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-lib-modules\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751060 kubelet[3249]: I0916 05:02:28.751051 3249 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hostproc\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751060 kubelet[3249]: I0916 05:02:28.751060 3249 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cni-path\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751060 kubelet[3249]: I0916 05:02:28.751069 3249 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/520ec74e-1110-4aa1-8d9f-84a7a476f809-cilium-config-path\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751084 3249 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-xtables-lock\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751093 3249 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87e26ec0-5339-4d19-9d8f-cff1af90e72a-clustermesh-secrets\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751100 3249 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-etc-cni-netd\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751115 3249 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c8m8l\" (UniqueName: \"kubernetes.io/projected/520ec74e-1110-4aa1-8d9f-84a7a476f809-kube-api-access-c8m8l\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751122 3249 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-hubble-tls\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751130 3249 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2lcnd\" (UniqueName: \"kubernetes.io/projected/87e26ec0-5339-4d19-9d8f-cff1af90e72a-kube-api-access-2lcnd\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751139 3249 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87e26ec0-5339-4d19-9d8f-cff1af90e72a-cilium-config-path\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.751299 kubelet[3249]: I0916 05:02:28.751149 3249 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87e26ec0-5339-4d19-9d8f-cff1af90e72a-host-proc-sys-net\") on node \"ip-172-31-17-131\" DevicePath \"\""
Sep 16 05:02:28.963449 kubelet[3249]: I0916 05:02:28.962602 3249 scope.go:117] "RemoveContainer" containerID="2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b"
Sep 16 05:02:28.964493 systemd[1]: Removed slice kubepods-besteffort-pod520ec74e_1110_4aa1_8d9f_84a7a476f809.slice - libcontainer container kubepods-besteffort-pod520ec74e_1110_4aa1_8d9f_84a7a476f809.slice.
Sep 16 05:02:28.964593 systemd[1]: kubepods-besteffort-pod520ec74e_1110_4aa1_8d9f_84a7a476f809.slice: Consumed 475ms CPU time, 39.9M memory peak, 16.7M read from disk, 4K written to disk.
Sep 16 05:02:28.968740 containerd[1893]: time="2025-09-16T05:02:28.968220856Z" level=info msg="RemoveContainer for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\""
Sep 16 05:02:28.977826 containerd[1893]: time="2025-09-16T05:02:28.977727381Z" level=info msg="RemoveContainer for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" returns successfully"
Sep 16 05:02:28.978826 kubelet[3249]: I0916 05:02:28.978722 3249 scope.go:117] "RemoveContainer" containerID="2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b"
Sep 16 05:02:28.982069 containerd[1893]: time="2025-09-16T05:02:28.979407721Z" level=error msg="ContainerStatus for \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\": not found"
Sep 16 05:02:28.982097 systemd[1]: Removed slice kubepods-burstable-pod87e26ec0_5339_4d19_9d8f_cff1af90e72a.slice - libcontainer container kubepods-burstable-pod87e26ec0_5339_4d19_9d8f_cff1af90e72a.slice.
Sep 16 05:02:28.982216 systemd[1]: kubepods-burstable-pod87e26ec0_5339_4d19_9d8f_cff1af90e72a.slice: Consumed 8.094s CPU time, 200.1M memory peak, 83.6M read from disk, 13.3M written to disk.
Sep 16 05:02:28.982672 kubelet[3249]: E0916 05:02:28.982606 3249 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\": not found" containerID="2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b"
Sep 16 05:02:28.982672 kubelet[3249]: I0916 05:02:28.982641 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b"} err="failed to get container status \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ab71e8653291fc3f5f4334f61972322e1ed32a57637daab11d447426848ce5b\": not found"
Sep 16 05:02:28.982877 kubelet[3249]: I0916 05:02:28.982681 3249 scope.go:117] "RemoveContainer" containerID="5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd"
Sep 16 05:02:28.991989 containerd[1893]: time="2025-09-16T05:02:28.991957342Z" level=info msg="RemoveContainer for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\""
Sep 16 05:02:29.024999 containerd[1893]: time="2025-09-16T05:02:29.024942685Z" level=info msg="RemoveContainer for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" returns successfully"
Sep 16 05:02:29.025338 kubelet[3249]: I0916 05:02:29.025298 3249 scope.go:117] "RemoveContainer" containerID="60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff"
Sep 16 05:02:29.027132 containerd[1893]: time="2025-09-16T05:02:29.027096243Z" level=info msg="RemoveContainer for \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\""
Sep 16 05:02:29.031497 containerd[1893]: time="2025-09-16T05:02:29.031447162Z" level=info msg="RemoveContainer for \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" returns successfully"
Sep 16 05:02:29.031953 kubelet[3249]: I0916 05:02:29.031925 3249 scope.go:117] "RemoveContainer" containerID="0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936"
Sep 16 05:02:29.034878 containerd[1893]: time="2025-09-16T05:02:29.034808980Z" level=info msg="RemoveContainer for \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\""
Sep 16 05:02:29.038891 containerd[1893]: time="2025-09-16T05:02:29.038855754Z" level=info msg="RemoveContainer for \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" returns successfully"
Sep 16 05:02:29.039080 kubelet[3249]: I0916 05:02:29.039054 3249 scope.go:117] "RemoveContainer" containerID="cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299"
Sep 16 05:02:29.040619 containerd[1893]: time="2025-09-16T05:02:29.040589289Z" level=info msg="RemoveContainer for \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\""
Sep 16 05:02:29.043897 containerd[1893]: time="2025-09-16T05:02:29.043851627Z" level=info msg="RemoveContainer for \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" returns successfully"
Sep 16 05:02:29.044339 kubelet[3249]: I0916 05:02:29.044138 3249 scope.go:117] "RemoveContainer" containerID="f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e"
Sep 16 05:02:29.045667 containerd[1893]: time="2025-09-16T05:02:29.045622083Z" level=info msg="RemoveContainer for \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\""
Sep 16 05:02:29.048998 containerd[1893]: time="2025-09-16T05:02:29.048969720Z" level=info msg="RemoveContainer for \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" returns successfully"
Sep 16 05:02:29.049161 kubelet[3249]: I0916 05:02:29.049140 3249 scope.go:117] "RemoveContainer" containerID="5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd"
Sep 16 05:02:29.049414 containerd[1893]: time="2025-09-16T05:02:29.049321164Z" level=error msg="ContainerStatus for \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\": not found"
Sep 16 05:02:29.049521 kubelet[3249]: E0916 05:02:29.049462 3249 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\": not found" containerID="5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd"
Sep 16 05:02:29.049569 kubelet[3249]: I0916 05:02:29.049526 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd"} err="failed to get container status \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c0259a48686195b00655b81edcbc366d2739e73ee89fdbc0406680f685945fd\": not found"
Sep 16 05:02:29.049569 kubelet[3249]: I0916 05:02:29.049546 3249 scope.go:117] "RemoveContainer" containerID="60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff"
Sep 16 05:02:29.049818 containerd[1893]: time="2025-09-16T05:02:29.049742335Z" level=error msg="ContainerStatus for \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\": not found"
Sep 16 05:02:29.050045 kubelet[3249]: E0916 05:02:29.050022 3249 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\": not found" containerID="60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff"
Sep 16 05:02:29.050092 kubelet[3249]: I0916 05:02:29.050046 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff"} err="failed to get container status \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"60098e801a93df21cd34b873bffbfe0206ca379cbfd80331500bd994236ac0ff\": not found"
Sep 16 05:02:29.050092 kubelet[3249]: I0916 05:02:29.050061 3249 scope.go:117] "RemoveContainer" containerID="0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936"
Sep 16 05:02:29.050232 containerd[1893]: time="2025-09-16T05:02:29.050206623Z" level=error msg="ContainerStatus for \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\": not found"
Sep 16 05:02:29.050314 kubelet[3249]: E0916 05:02:29.050295 3249 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\": not found" containerID="0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936"
Sep 16 05:02:29.050348 kubelet[3249]: I0916 05:02:29.050318 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936"} err="failed to get container status \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ce73fb7d0b241b387caed0028f95d8426f1e04293bd260cbfde363e058e5936\": not found"
Sep 16 05:02:29.050348 kubelet[3249]: I0916 05:02:29.050332 3249 scope.go:117] "RemoveContainer" containerID="cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299"
Sep 16 05:02:29.050575 containerd[1893]: time="2025-09-16T05:02:29.050463216Z" level=error msg="ContainerStatus for \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\": not found"
Sep 16 05:02:29.050604 kubelet[3249]: E0916 05:02:29.050584 3249 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\": not found" containerID="cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299"
Sep 16 05:02:29.050643 kubelet[3249]: I0916 05:02:29.050601 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299"} err="failed to get container status \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd82eb07ff28d822baf87ef5f471ea82e15b91ad1de94db5f3162c3f049a5299\": not found"
Sep 16 05:02:29.050643 kubelet[3249]: I0916 05:02:29.050614 3249 scope.go:117] "RemoveContainer" containerID="f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e"
Sep 16 05:02:29.050748 containerd[1893]: time="2025-09-16T05:02:29.050720483Z" level=error msg="ContainerStatus for \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\": not found"
Sep 16 05:02:29.050869 kubelet[3249]: E0916 05:02:29.050847 3249 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\": not found" containerID="f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e"
Sep 16 05:02:29.050905 kubelet[3249]: I0916 05:02:29.050867 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e"} err="failed to get container status \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f780faf490e3a55bf4f1b191346f5cc92782ea343019eb7f8f275f725fb5130e\": not found"
Sep 16 05:02:29.384416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f-shm.mount: Deactivated successfully.
Sep 16 05:02:29.384561 systemd[1]: var-lib-kubelet-pods-87e26ec0\x2d5339\x2d4d19\x2d9d8f\x2dcff1af90e72a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 16 05:02:29.384637 systemd[1]: var-lib-kubelet-pods-87e26ec0\x2d5339\x2d4d19\x2d9d8f\x2dcff1af90e72a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 16 05:02:29.384698 systemd[1]: var-lib-kubelet-pods-520ec74e\x2d1110\x2d4aa1\x2d8d9f\x2d84a7a476f809-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc8m8l.mount: Deactivated successfully.
Sep 16 05:02:29.384758 systemd[1]: var-lib-kubelet-pods-87e26ec0\x2d5339\x2d4d19\x2d9d8f\x2dcff1af90e72a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2lcnd.mount: Deactivated successfully.
Sep 16 05:02:30.138094 sshd[4918]: Connection closed by 139.178.68.195 port 50554 Sep 16 05:02:30.138790 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:30.145648 systemd[1]: sshd@24-172.31.17.131:22-139.178.68.195:50554.service: Deactivated successfully. Sep 16 05:02:30.148379 systemd[1]: session-25.scope: Deactivated successfully. Sep 16 05:02:30.149600 systemd-logind[1863]: Session 25 logged out. Waiting for processes to exit. Sep 16 05:02:30.151933 systemd-logind[1863]: Removed session 25. Sep 16 05:02:30.172201 systemd[1]: Started sshd@25-172.31.17.131:22-139.178.68.195:50568.service - OpenSSH per-connection server daemon (139.178.68.195:50568). Sep 16 05:02:30.344807 sshd[5067]: Accepted publickey for core from 139.178.68.195 port 50568 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:02:30.346247 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:30.352868 systemd-logind[1863]: New session 26 of user core. Sep 16 05:02:30.357657 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 16 05:02:30.539939 kubelet[3249]: I0916 05:02:30.539894 3249 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520ec74e-1110-4aa1-8d9f-84a7a476f809" path="/var/lib/kubelet/pods/520ec74e-1110-4aa1-8d9f-84a7a476f809/volumes" Sep 16 05:02:30.540947 kubelet[3249]: I0916 05:02:30.540898 3249 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e26ec0-5339-4d19-9d8f-cff1af90e72a" path="/var/lib/kubelet/pods/87e26ec0-5339-4d19-9d8f-cff1af90e72a/volumes" Sep 16 05:02:30.645278 ntpd[1855]: Deleting 10 lxc_health, [fe80::cc87:87ff:fe81:e0d0%8]:123, stats: received=0, sent=0, dropped=0, active_time=60 secs Sep 16 05:02:30.645644 ntpd[1855]: 16 Sep 05:02:30 ntpd[1855]: Deleting 10 lxc_health, [fe80::cc87:87ff:fe81:e0d0%8]:123, stats: received=0, sent=0, dropped=0, active_time=60 secs Sep 16 05:02:31.455502 sshd[5070]: Connection closed by 139.178.68.195 port 50568 Sep 16 05:02:31.456637 sshd-session[5067]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:31.465255 systemd-logind[1863]: Session 26 logged out. Waiting for processes to exit. Sep 16 05:02:31.467286 systemd[1]: sshd@25-172.31.17.131:22-139.178.68.195:50568.service: Deactivated successfully. Sep 16 05:02:31.470979 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 05:02:31.490885 systemd-logind[1863]: Removed session 26. Sep 16 05:02:31.492614 systemd[1]: Started sshd@26-172.31.17.131:22-139.178.68.195:50574.service - OpenSSH per-connection server daemon (139.178.68.195:50574). Sep 16 05:02:31.664164 sshd[5081]: Accepted publickey for core from 139.178.68.195 port 50574 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:02:31.665566 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:31.670399 systemd-logind[1863]: New session 27 of user core. Sep 16 05:02:31.675630 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 16 05:02:31.710516 kubelet[3249]: E0916 05:02:31.710363 3249 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 05:02:31.792512 sshd[5085]: Connection closed by 139.178.68.195 port 50574 Sep 16 05:02:31.793206 sshd-session[5081]: pam_unix(sshd:session): session closed for user core Sep 16 05:02:31.797325 systemd[1]: sshd@26-172.31.17.131:22-139.178.68.195:50574.service: Deactivated successfully. Sep 16 05:02:31.799215 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 05:02:31.800660 systemd-logind[1863]: Session 27 logged out. Waiting for processes to exit. Sep 16 05:02:31.802946 systemd-logind[1863]: Removed session 27. Sep 16 05:02:31.825871 systemd[1]: Started sshd@27-172.31.17.131:22-139.178.68.195:50588.service - OpenSSH per-connection server daemon (139.178.68.195:50588). Sep 16 05:02:31.999093 sshd[5092]: Accepted publickey for core from 139.178.68.195 port 50588 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:02:32.000807 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:02:32.008551 systemd-logind[1863]: New session 28 of user core. Sep 16 05:02:32.018730 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 16 05:02:32.194811 systemd[1]: Created slice kubepods-burstable-podab37dfd9_9238_478c_939a_a0596e018dd3.slice - libcontainer container kubepods-burstable-podab37dfd9_9238_478c_939a_a0596e018dd3.slice. 
Sep 16 05:02:32.274336 kubelet[3249]: I0916 05:02:32.273909 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-host-proc-sys-kernel\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274336 kubelet[3249]: I0916 05:02:32.273951 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-hostproc\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274336 kubelet[3249]: I0916 05:02:32.273974 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-cni-path\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274336 kubelet[3249]: I0916 05:02:32.274019 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-xtables-lock\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274336 kubelet[3249]: I0916 05:02:32.274057 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab37dfd9-9238-478c-939a-a0596e018dd3-clustermesh-secrets\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274336 kubelet[3249]: I0916 05:02:32.274095 3249 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab37dfd9-9238-478c-939a-a0596e018dd3-cilium-config-path\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274708 kubelet[3249]: I0916 05:02:32.274113 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-bpf-maps\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274708 kubelet[3249]: I0916 05:02:32.274128 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-cilium-cgroup\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274708 kubelet[3249]: I0916 05:02:32.274143 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ab37dfd9-9238-478c-939a-a0596e018dd3-cilium-ipsec-secrets\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274708 kubelet[3249]: I0916 05:02:32.274159 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-lib-modules\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274708 kubelet[3249]: I0916 05:02:32.274173 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-host-proc-sys-net\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274708 kubelet[3249]: I0916 05:02:32.274189 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab37dfd9-9238-478c-939a-a0596e018dd3-hubble-tls\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274861 kubelet[3249]: I0916 05:02:32.274202 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzk7r\" (UniqueName: \"kubernetes.io/projected/ab37dfd9-9238-478c-939a-a0596e018dd3-kube-api-access-lzk7r\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274861 kubelet[3249]: I0916 05:02:32.274224 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-cilium-run\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.274861 kubelet[3249]: I0916 05:02:32.274240 3249 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab37dfd9-9238-478c-939a-a0596e018dd3-etc-cni-netd\") pod \"cilium-7r2wp\" (UID: \"ab37dfd9-9238-478c-939a-a0596e018dd3\") " pod="kube-system/cilium-7r2wp" Sep 16 05:02:32.507874 containerd[1893]: time="2025-09-16T05:02:32.507813627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7r2wp,Uid:ab37dfd9-9238-478c-939a-a0596e018dd3,Namespace:kube-system,Attempt:0,}" Sep 16 05:02:32.529409 containerd[1893]: time="2025-09-16T05:02:32.529261151Z" level=info 
msg="connecting to shim 28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26" address="unix:///run/containerd/s/b53d0ba30af4d537ec553a8672d22472c1217f16ec252fb501686951e53c6a8b" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:02:32.562693 systemd[1]: Started cri-containerd-28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26.scope - libcontainer container 28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26. Sep 16 05:02:32.590715 containerd[1893]: time="2025-09-16T05:02:32.590681092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7r2wp,Uid:ab37dfd9-9238-478c-939a-a0596e018dd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\"" Sep 16 05:02:32.596463 containerd[1893]: time="2025-09-16T05:02:32.596421708Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 05:02:32.612497 containerd[1893]: time="2025-09-16T05:02:32.611788450Z" level=info msg="Container 051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:02:32.621730 containerd[1893]: time="2025-09-16T05:02:32.621682210Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\"" Sep 16 05:02:32.622801 containerd[1893]: time="2025-09-16T05:02:32.622338195Z" level=info msg="StartContainer for \"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\"" Sep 16 05:02:32.623398 containerd[1893]: time="2025-09-16T05:02:32.623332422Z" level=info msg="connecting to shim 051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577" 
address="unix:///run/containerd/s/b53d0ba30af4d537ec553a8672d22472c1217f16ec252fb501686951e53c6a8b" protocol=ttrpc version=3 Sep 16 05:02:32.647725 systemd[1]: Started cri-containerd-051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577.scope - libcontainer container 051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577. Sep 16 05:02:32.682744 containerd[1893]: time="2025-09-16T05:02:32.682711050Z" level=info msg="StartContainer for \"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\" returns successfully" Sep 16 05:02:32.705637 systemd[1]: cri-containerd-051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577.scope: Deactivated successfully. Sep 16 05:02:32.706569 systemd[1]: cri-containerd-051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577.scope: Consumed 24ms CPU time, 9.1M memory peak, 2.7M read from disk. Sep 16 05:02:32.710843 containerd[1893]: time="2025-09-16T05:02:32.710419784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\" id:\"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\" pid:5164 exited_at:{seconds:1757998952 nanos:709702314}" Sep 16 05:02:32.710843 containerd[1893]: time="2025-09-16T05:02:32.710583409Z" level=info msg="received exit event container_id:\"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\" id:\"051ac11d2926bf283e8c1979e0b8b8a915391ce21583d79acfcbf73c5c011577\" pid:5164 exited_at:{seconds:1757998952 nanos:709702314}" Sep 16 05:02:32.994909 containerd[1893]: time="2025-09-16T05:02:32.994861960Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 05:02:33.003344 containerd[1893]: time="2025-09-16T05:02:33.003294112Z" level=info msg="Container 2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4: CDI 
devices from CRI Config.CDIDevices: []" Sep 16 05:02:33.021517 containerd[1893]: time="2025-09-16T05:02:33.021456607Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\"" Sep 16 05:02:33.023657 containerd[1893]: time="2025-09-16T05:02:33.023615257Z" level=info msg="StartContainer for \"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\"" Sep 16 05:02:33.024925 containerd[1893]: time="2025-09-16T05:02:33.024880125Z" level=info msg="connecting to shim 2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4" address="unix:///run/containerd/s/b53d0ba30af4d537ec553a8672d22472c1217f16ec252fb501686951e53c6a8b" protocol=ttrpc version=3 Sep 16 05:02:33.050798 systemd[1]: Started cri-containerd-2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4.scope - libcontainer container 2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4. Sep 16 05:02:33.088813 containerd[1893]: time="2025-09-16T05:02:33.088764196Z" level=info msg="StartContainer for \"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\" returns successfully" Sep 16 05:02:33.105678 systemd[1]: cri-containerd-2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4.scope: Deactivated successfully. Sep 16 05:02:33.106023 systemd[1]: cri-containerd-2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4.scope: Consumed 23ms CPU time, 7.5M memory peak, 2.1M read from disk. 
Sep 16 05:02:33.107660 containerd[1893]: time="2025-09-16T05:02:33.107599814Z" level=info msg="received exit event container_id:\"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\" id:\"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\" pid:5210 exited_at:{seconds:1757998953 nanos:107319390}" Sep 16 05:02:33.108490 containerd[1893]: time="2025-09-16T05:02:33.108248303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\" id:\"2fb4fe3845bdc48afa527847d3d211982345326933f0b1a9628ff5f87de2fbf4\" pid:5210 exited_at:{seconds:1757998953 nanos:107319390}" Sep 16 05:02:34.000081 containerd[1893]: time="2025-09-16T05:02:33.999895670Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 05:02:34.016504 containerd[1893]: time="2025-09-16T05:02:34.014659743Z" level=info msg="Container cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:02:34.026980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910722173.mount: Deactivated successfully. 
Sep 16 05:02:34.033073 containerd[1893]: time="2025-09-16T05:02:34.033026368Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\"" Sep 16 05:02:34.033766 containerd[1893]: time="2025-09-16T05:02:34.033722591Z" level=info msg="StartContainer for \"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\"" Sep 16 05:02:34.035959 containerd[1893]: time="2025-09-16T05:02:34.035917210Z" level=info msg="connecting to shim cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1" address="unix:///run/containerd/s/b53d0ba30af4d537ec553a8672d22472c1217f16ec252fb501686951e53c6a8b" protocol=ttrpc version=3 Sep 16 05:02:34.063730 systemd[1]: Started cri-containerd-cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1.scope - libcontainer container cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1. Sep 16 05:02:34.111392 containerd[1893]: time="2025-09-16T05:02:34.111354790Z" level=info msg="StartContainer for \"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\" returns successfully" Sep 16 05:02:34.197631 systemd[1]: cri-containerd-cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1.scope: Deactivated successfully. 
Sep 16 05:02:34.200237 containerd[1893]: time="2025-09-16T05:02:34.200160361Z" level=info msg="received exit event container_id:\"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\" id:\"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\" pid:5256 exited_at:{seconds:1757998954 nanos:199945837}" Sep 16 05:02:34.200457 containerd[1893]: time="2025-09-16T05:02:34.200324069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\" id:\"cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1\" pid:5256 exited_at:{seconds:1757998954 nanos:199945837}" Sep 16 05:02:34.230417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb51cce7b3cfd6f355df02d24b093033796e66d44fa234f9738d8b3097473ef1-rootfs.mount: Deactivated successfully. Sep 16 05:02:35.003837 containerd[1893]: time="2025-09-16T05:02:35.003771414Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 05:02:35.022101 containerd[1893]: time="2025-09-16T05:02:35.020359980Z" level=info msg="Container 793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:02:35.033748 containerd[1893]: time="2025-09-16T05:02:35.033683715Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\"" Sep 16 05:02:35.034960 containerd[1893]: time="2025-09-16T05:02:35.034833451Z" level=info msg="StartContainer for \"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\"" Sep 16 05:02:35.036505 containerd[1893]: time="2025-09-16T05:02:35.036427630Z" level=info msg="connecting to shim 
793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8" address="unix:///run/containerd/s/b53d0ba30af4d537ec553a8672d22472c1217f16ec252fb501686951e53c6a8b" protocol=ttrpc version=3 Sep 16 05:02:35.073677 systemd[1]: Started cri-containerd-793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8.scope - libcontainer container 793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8. Sep 16 05:02:35.113204 systemd[1]: cri-containerd-793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8.scope: Deactivated successfully. Sep 16 05:02:35.118022 containerd[1893]: time="2025-09-16T05:02:35.117902186Z" level=info msg="received exit event container_id:\"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\" id:\"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\" pid:5296 exited_at:{seconds:1757998955 nanos:113739810}" Sep 16 05:02:35.119985 containerd[1893]: time="2025-09-16T05:02:35.119851672Z" level=info msg="StartContainer for \"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\" returns successfully" Sep 16 05:02:35.127781 containerd[1893]: time="2025-09-16T05:02:35.127707422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\" id:\"793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8\" pid:5296 exited_at:{seconds:1757998955 nanos:113739810}" Sep 16 05:02:35.163033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-793649c03a6a80d2d762f3110eca815be03f2c5e3a97fb2a6956acb42c4303e8-rootfs.mount: Deactivated successfully. 
Sep 16 05:02:36.013188 containerd[1893]: time="2025-09-16T05:02:36.013077728Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 05:02:36.030419 containerd[1893]: time="2025-09-16T05:02:36.028641950Z" level=info msg="Container 255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:02:36.042283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790276488.mount: Deactivated successfully. Sep 16 05:02:36.043027 containerd[1893]: time="2025-09-16T05:02:36.042984615Z" level=info msg="CreateContainer within sandbox \"28799e8ff10b596536343ca2151e4bf768588e5d56420cf6edbb613909f41d26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\"" Sep 16 05:02:36.045697 containerd[1893]: time="2025-09-16T05:02:36.044775409Z" level=info msg="StartContainer for \"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\"" Sep 16 05:02:36.046397 containerd[1893]: time="2025-09-16T05:02:36.046225779Z" level=info msg="connecting to shim 255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7" address="unix:///run/containerd/s/b53d0ba30af4d537ec553a8672d22472c1217f16ec252fb501686951e53c6a8b" protocol=ttrpc version=3 Sep 16 05:02:36.078719 systemd[1]: Started cri-containerd-255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7.scope - libcontainer container 255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7. 
Sep 16 05:02:36.125555 containerd[1893]: time="2025-09-16T05:02:36.125458723Z" level=info msg="StartContainer for \"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" returns successfully" Sep 16 05:02:36.640973 containerd[1893]: time="2025-09-16T05:02:36.640926562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" id:\"0a38b0976acf0dfcec8937dd7f906908ace2bae5c46157e68fe7d15aa6450979\" pid:5364 exited_at:{seconds:1757998956 nanos:640514032}" Sep 16 05:02:36.893506 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 16 05:02:37.054512 kubelet[3249]: I0916 05:02:37.054434 3249 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7r2wp" podStartSLOduration=6.054412949 podStartE2EDuration="6.054412949s" podCreationTimestamp="2025-09-16 05:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:02:37.053937612 +0000 UTC m=+100.698269839" watchObservedRunningTime="2025-09-16 05:02:37.054412949 +0000 UTC m=+100.698745120" Sep 16 05:02:38.761973 containerd[1893]: time="2025-09-16T05:02:38.761914370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" id:\"543392578709409544a5ec995309aff4c13e0f90ffc676e5db09f1e1dbd761ae\" pid:5506 exit_status:1 exited_at:{seconds:1757998958 nanos:759833738}" Sep 16 05:02:40.043114 systemd-networkd[1819]: lxc_health: Link UP Sep 16 05:02:40.050117 systemd-networkd[1819]: lxc_health: Gained carrier Sep 16 05:02:40.057535 (udev-worker)[5855]: Network interface NamePolicy= disabled on kernel command line. 
Sep 16 05:02:41.018218 containerd[1893]: time="2025-09-16T05:02:41.018175188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" id:\"ba612cce1bcc9b8a029003c1fdee2a5b0e5fd34f0dec6675b8d0ddd616e40f18\" pid:5884 exited_at:{seconds:1757998961 nanos:16397127}" Sep 16 05:02:42.016758 systemd-networkd[1819]: lxc_health: Gained IPv6LL Sep 16 05:02:43.275310 containerd[1893]: time="2025-09-16T05:02:43.275260924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" id:\"d4674fa5e953b8a970e4b434215e785658bc575818c97de9fb7254456f138134\" pid:5917 exited_at:{seconds:1757998963 nanos:274723140}" Sep 16 05:02:44.645373 ntpd[1855]: Listen normally on 13 lxc_health [fe80::540d:e3ff:fe27:32a7%14]:123 Sep 16 05:02:44.646015 ntpd[1855]: 16 Sep 05:02:44 ntpd[1855]: Listen normally on 13 lxc_health [fe80::540d:e3ff:fe27:32a7%14]:123 Sep 16 05:02:45.395264 containerd[1893]: time="2025-09-16T05:02:45.395219075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" id:\"d88dd9f8a657db423edaeeffc6e88412d894c4d78dfd1fc71ac5e941d6d799f6\" pid:5951 exited_at:{seconds:1757998965 nanos:394862116}" Sep 16 05:02:47.536356 containerd[1893]: time="2025-09-16T05:02:47.536230842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"255bf02f9ab4bed406fd46f61cdfb241fd6272d315a0316d3c74234fe165d0e7\" id:\"d0a24406ad359c30c8f729de4928d566467978b84a723143ba1d2a9ce22270b5\" pid:5975 exited_at:{seconds:1757998967 nanos:535432932}" Sep 16 05:02:47.563002 sshd[5096]: Connection closed by 139.178.68.195 port 50588 Sep 16 05:02:47.573572 systemd[1]: sshd@27-172.31.17.131:22-139.178.68.195:50588.service: Deactivated successfully. 
Sep 16 05:02:47.564224 sshd-session[5092]: pam_unix(sshd:session): session closed for user core
Sep 16 05:02:47.575951 systemd[1]: session-28.scope: Deactivated successfully.
Sep 16 05:02:47.577381 systemd-logind[1863]: Session 28 logged out. Waiting for processes to exit.
Sep 16 05:02:47.578875 systemd-logind[1863]: Removed session 28.
Sep 16 05:02:56.533848 containerd[1893]: time="2025-09-16T05:02:56.533808970Z" level=info msg="StopPodSandbox for \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\""
Sep 16 05:02:56.534345 containerd[1893]: time="2025-09-16T05:02:56.533943793Z" level=info msg="TearDown network for sandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" successfully"
Sep 16 05:02:56.534345 containerd[1893]: time="2025-09-16T05:02:56.533955697Z" level=info msg="StopPodSandbox for \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" returns successfully"
Sep 16 05:02:56.534766 containerd[1893]: time="2025-09-16T05:02:56.534726478Z" level=info msg="RemovePodSandbox for \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\""
Sep 16 05:02:56.534861 containerd[1893]: time="2025-09-16T05:02:56.534768766Z" level=info msg="Forcibly stopping sandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\""
Sep 16 05:02:56.534861 containerd[1893]: time="2025-09-16T05:02:56.534856506Z" level=info msg="TearDown network for sandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" successfully"
Sep 16 05:02:56.536824 containerd[1893]: time="2025-09-16T05:02:56.536778763Z" level=info msg="Ensure that sandbox 770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72 in task-service has been cleanup successfully"
Sep 16 05:02:56.543886 containerd[1893]: time="2025-09-16T05:02:56.543816042Z" level=info msg="RemovePodSandbox \"770d89b5b4ad8b6628f3d05d4981af3b8cb905c5d7e3781819520aa59e5fce72\" returns successfully"
Sep 16 05:02:56.544300 containerd[1893]: time="2025-09-16T05:02:56.544265336Z" level=info msg="StopPodSandbox for \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\""
Sep 16 05:02:56.544410 containerd[1893]: time="2025-09-16T05:02:56.544389524Z" level=info msg="TearDown network for sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" successfully"
Sep 16 05:02:56.544410 containerd[1893]: time="2025-09-16T05:02:56.544405688Z" level=info msg="StopPodSandbox for \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" returns successfully"
Sep 16 05:02:56.545401 containerd[1893]: time="2025-09-16T05:02:56.544796297Z" level=info msg="RemovePodSandbox for \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\""
Sep 16 05:02:56.545401 containerd[1893]: time="2025-09-16T05:02:56.544822348Z" level=info msg="Forcibly stopping sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\""
Sep 16 05:02:56.545401 containerd[1893]: time="2025-09-16T05:02:56.544903020Z" level=info msg="TearDown network for sandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" successfully"
Sep 16 05:02:56.546386 containerd[1893]: time="2025-09-16T05:02:56.546365823Z" level=info msg="Ensure that sandbox 6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f in task-service has been cleanup successfully"
Sep 16 05:02:56.551538 containerd[1893]: time="2025-09-16T05:02:56.551453367Z" level=info msg="RemovePodSandbox \"6213bf3b45aa9029be2ae36d57457a8838b1c22fc05035a8f6216376e1ef820f\" returns successfully"
Sep 16 05:03:02.304286 systemd[1]: cri-containerd-31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390.scope: Deactivated successfully.
Sep 16 05:03:02.306354 systemd[1]: cri-containerd-31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390.scope: Consumed 4.275s CPU time, 81.4M memory peak, 32.7M read from disk.
Sep 16 05:03:02.354958 containerd[1893]: time="2025-09-16T05:03:02.316498995Z" level=info msg="received exit event container_id:\"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\" id:\"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\" pid:3090 exit_status:1 exited_at:{seconds:1757998982 nanos:316045829}"
Sep 16 05:03:02.354958 containerd[1893]: time="2025-09-16T05:03:02.316694319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\" id:\"31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390\" pid:3090 exit_status:1 exited_at:{seconds:1757998982 nanos:316045829}"
Sep 16 05:03:02.451531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390-rootfs.mount: Deactivated successfully.
Sep 16 05:03:03.095454 kubelet[3249]: I0916 05:03:03.094773 3249 scope.go:117] "RemoveContainer" containerID="31efe49d3f08b81ca6af53f221cbc1a3af66c561985b54d701e5d63219f49390"
Sep 16 05:03:03.108834 containerd[1893]: time="2025-09-16T05:03:03.108788129Z" level=info msg="CreateContainer within sandbox \"9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 16 05:03:03.175679 containerd[1893]: time="2025-09-16T05:03:03.175615926Z" level=info msg="Container 4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:03:03.184830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692850912.mount: Deactivated successfully.
Sep 16 05:03:03.201593 containerd[1893]: time="2025-09-16T05:03:03.201117497Z" level=info msg="CreateContainer within sandbox \"9c867680e5fdd8083c060f28947f917973059e0e45361050c8b44fc3d6e3a99f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a\""
Sep 16 05:03:03.204021 containerd[1893]: time="2025-09-16T05:03:03.203982433Z" level=info msg="StartContainer for \"4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a\""
Sep 16 05:03:03.205603 containerd[1893]: time="2025-09-16T05:03:03.205562761Z" level=info msg="connecting to shim 4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a" address="unix:///run/containerd/s/7c17e1f93b483157135ed244fb92dc1e916e27d14c04eb39211176f47b0791cd" protocol=ttrpc version=3
Sep 16 05:03:03.260446 systemd[1]: Started cri-containerd-4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a.scope - libcontainer container 4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a.
Sep 16 05:03:03.448711 containerd[1893]: time="2025-09-16T05:03:03.448581123Z" level=info msg="StartContainer for \"4fc1f6ca6c1f1d80a041929a06226c3b1568f51a9b94f387072af92d19382e8a\" returns successfully"
Sep 16 05:03:07.209115 systemd[1]: cri-containerd-2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac.scope: Deactivated successfully.
Sep 16 05:03:07.209495 systemd[1]: cri-containerd-2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac.scope: Consumed 2.537s CPU time, 31.5M memory peak, 14M read from disk.
Sep 16 05:03:07.213911 containerd[1893]: time="2025-09-16T05:03:07.213753174Z" level=info msg="received exit event container_id:\"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\" id:\"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\" pid:3061 exit_status:1 exited_at:{seconds:1757998987 nanos:212234287}"
Sep 16 05:03:07.218646 containerd[1893]: time="2025-09-16T05:03:07.218570359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\" id:\"2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac\" pid:3061 exit_status:1 exited_at:{seconds:1757998987 nanos:212234287}"
Sep 16 05:03:07.258619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac-rootfs.mount: Deactivated successfully.
Sep 16 05:03:08.128657 kubelet[3249]: I0916 05:03:08.128625 3249 scope.go:117] "RemoveContainer" containerID="2dc980dc7b707f3970bbbb3b94daa5f5105d547d4ab79a2a8a78a2d563f02dac"
Sep 16 05:03:08.130975 containerd[1893]: time="2025-09-16T05:03:08.130933154Z" level=info msg="CreateContainer within sandbox \"5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 16 05:03:08.147270 containerd[1893]: time="2025-09-16T05:03:08.146455784Z" level=info msg="Container a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6: CDI devices from CRI Config.CDIDevices: []"
Sep 16 05:03:08.170876 containerd[1893]: time="2025-09-16T05:03:08.170815896Z" level=info msg="CreateContainer within sandbox \"5e6118e99c6037f19a1ee3cd7e18a8600dc4841b2a34f37435f7dfb5f816aa91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6\""
Sep 16 05:03:08.171501 containerd[1893]: time="2025-09-16T05:03:08.171305969Z" level=info msg="StartContainer for \"a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6\""
Sep 16 05:03:08.175122 containerd[1893]: time="2025-09-16T05:03:08.175045183Z" level=info msg="connecting to shim a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6" address="unix:///run/containerd/s/387f703f2747c5aa5b602b29008828819fae7bf60ee4b27c226ec65b175464c9" protocol=ttrpc version=3
Sep 16 05:03:08.206718 systemd[1]: Started cri-containerd-a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6.scope - libcontainer container a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6.
Sep 16 05:03:08.271009 containerd[1893]: time="2025-09-16T05:03:08.270974832Z" level=info msg="StartContainer for \"a6ce1f0213e64cbea97f17bbba4afeb340ab1d81f528e8208a8564bc53972cb6\" returns successfully"
Sep 16 05:03:09.058733 kubelet[3249]: E0916 05:03:09.058671 3249 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-131?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 16 05:03:19.059939 kubelet[3249]: E0916 05:03:19.059867 3249 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-131?timeout=10s\": context deadline exceeded"