Sep 16 04:57:42.989768 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 16 03:05:42 -00 2025 Sep 16 04:57:42.989805 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:57:42.989821 kernel: BIOS-provided physical RAM map: Sep 16 04:57:42.989832 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 16 04:57:42.989843 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 16 04:57:42.989854 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 16 04:57:42.989867 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 16 04:57:42.989879 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 16 04:57:42.989893 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 16 04:57:42.989905 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 16 04:57:42.989916 kernel: NX (Execute Disable) protection: active Sep 16 04:57:42.989927 kernel: APIC: Static calls initialized Sep 16 04:57:42.989939 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Sep 16 04:57:42.989951 kernel: extended physical RAM map: Sep 16 04:57:42.989968 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 16 04:57:42.989981 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Sep 16 04:57:42.989994 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Sep 16 04:57:42.990007 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Sep 16 04:57:42.990019 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 16 04:57:42.990032 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 16 04:57:42.990045 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 16 04:57:42.990057 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 16 04:57:42.990070 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 16 04:57:42.990083 kernel: efi: EFI v2.7 by EDK II Sep 16 04:57:42.990099 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Sep 16 04:57:42.990112 kernel: secureboot: Secure boot disabled Sep 16 04:57:42.990125 kernel: SMBIOS 2.7 present. Sep 16 04:57:42.990138 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 16 04:57:42.990152 kernel: DMI: Memory slots populated: 1/1 Sep 16 04:57:42.990165 kernel: Hypervisor detected: KVM Sep 16 04:57:42.990178 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 16 04:57:42.990218 kernel: kvm-clock: using sched offset of 5112226199 cycles Sep 16 04:57:42.990232 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 16 04:57:42.990246 kernel: tsc: Detected 2500.006 MHz processor Sep 16 04:57:42.990259 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 16 04:57:42.990274 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 16 04:57:42.990287 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 16 04:57:42.990299 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 16 04:57:42.990312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 16 04:57:42.990325 kernel: Using GB pages 
for direct mapping Sep 16 04:57:42.990343 kernel: ACPI: Early table checksum verification disabled Sep 16 04:57:42.990358 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 16 04:57:42.990372 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 16 04:57:42.990386 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 16 04:57:42.990399 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 16 04:57:42.990413 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 16 04:57:42.990426 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 16 04:57:42.990439 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 16 04:57:42.990452 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 16 04:57:42.990468 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 16 04:57:42.990482 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 16 04:57:42.990495 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 16 04:57:42.990508 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 16 04:57:42.990521 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 16 04:57:42.990535 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 16 04:57:42.990548 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 16 04:57:42.990561 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 16 04:57:42.990577 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 16 04:57:42.990590 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 16 04:57:42.990603 kernel: ACPI: 
Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 16 04:57:42.990617 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 16 04:57:42.990630 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 16 04:57:42.990643 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 16 04:57:42.990656 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 16 04:57:42.990669 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 16 04:57:42.990683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 16 04:57:42.990696 kernel: NUMA: Initialized distance table, cnt=1 Sep 16 04:57:42.990711 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Sep 16 04:57:42.990725 kernel: Zone ranges: Sep 16 04:57:42.990738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 16 04:57:42.990750 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 16 04:57:42.990764 kernel: Normal empty Sep 16 04:57:42.990777 kernel: Device empty Sep 16 04:57:42.990790 kernel: Movable zone start for each node Sep 16 04:57:42.990803 kernel: Early memory node ranges Sep 16 04:57:42.990815 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 16 04:57:42.990831 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 16 04:57:42.990845 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 16 04:57:42.990858 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 16 04:57:42.990871 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 16 04:57:42.990884 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 16 04:57:42.990898 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 16 04:57:42.990911 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 16 04:57:42.990924 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 16 04:57:42.990937 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl 
dfl lint[0x1]) Sep 16 04:57:42.990951 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 16 04:57:42.990967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 16 04:57:42.990980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 16 04:57:42.990993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 16 04:57:42.991007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 16 04:57:42.991020 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 16 04:57:42.991033 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 16 04:57:42.991047 kernel: TSC deadline timer available Sep 16 04:57:42.991060 kernel: CPU topo: Max. logical packages: 1 Sep 16 04:57:42.991074 kernel: CPU topo: Max. logical dies: 1 Sep 16 04:57:42.991089 kernel: CPU topo: Max. dies per package: 1 Sep 16 04:57:42.991102 kernel: CPU topo: Max. threads per core: 2 Sep 16 04:57:42.991115 kernel: CPU topo: Num. cores per package: 1 Sep 16 04:57:42.991128 kernel: CPU topo: Num. 
threads per package: 2 Sep 16 04:57:42.991141 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 16 04:57:42.991155 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 16 04:57:42.991169 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 16 04:57:42.991182 kernel: Booting paravirtualized kernel on KVM Sep 16 04:57:42.991222 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 16 04:57:42.991240 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 16 04:57:42.991253 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 16 04:57:42.991266 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 16 04:57:42.991280 kernel: pcpu-alloc: [0] 0 1 Sep 16 04:57:42.991293 kernel: kvm-guest: PV spinlocks enabled Sep 16 04:57:42.991307 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 16 04:57:42.991323 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06 Sep 16 04:57:42.991337 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 16 04:57:42.991353 kernel: random: crng init done Sep 16 04:57:42.991366 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 16 04:57:42.991380 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 16 04:57:42.991393 kernel: Fallback order for Node 0: 0 Sep 16 04:57:42.991407 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Sep 16 04:57:42.991421 kernel: Policy zone: DMA32 Sep 16 04:57:42.991444 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 16 04:57:42.991461 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 16 04:57:42.991475 kernel: Kernel/User page tables isolation: enabled Sep 16 04:57:42.991489 kernel: ftrace: allocating 40125 entries in 157 pages Sep 16 04:57:42.991503 kernel: ftrace: allocated 157 pages with 5 groups Sep 16 04:57:42.991517 kernel: Dynamic Preempt: voluntary Sep 16 04:57:42.991534 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 16 04:57:42.991549 kernel: rcu: RCU event tracing is enabled. Sep 16 04:57:42.991564 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 16 04:57:42.991578 kernel: Trampoline variant of Tasks RCU enabled. Sep 16 04:57:42.991593 kernel: Rude variant of Tasks RCU enabled. Sep 16 04:57:42.991610 kernel: Tracing variant of Tasks RCU enabled. Sep 16 04:57:42.991624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 16 04:57:42.991639 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 16 04:57:42.991653 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:57:42.991668 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 16 04:57:42.991682 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Sep 16 04:57:42.991696 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 16 04:57:42.991710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 16 04:57:42.991725 kernel: Console: colour dummy device 80x25 Sep 16 04:57:42.991738 kernel: printk: legacy console [tty0] enabled Sep 16 04:57:42.991750 kernel: printk: legacy console [ttyS0] enabled Sep 16 04:57:42.991761 kernel: ACPI: Core revision 20240827 Sep 16 04:57:42.991773 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 16 04:57:42.991786 kernel: APIC: Switch to symmetric I/O mode setup Sep 16 04:57:42.991800 kernel: x2apic enabled Sep 16 04:57:42.991812 kernel: APIC: Switched APIC routing to: physical x2apic Sep 16 04:57:42.991825 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Sep 16 04:57:42.991837 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006) Sep 16 04:57:42.991853 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 16 04:57:42.991866 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 16 04:57:42.991878 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 16 04:57:42.991892 kernel: Spectre V2 : Mitigation: Retpolines Sep 16 04:57:42.991905 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 16 04:57:42.991918 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 16 04:57:42.991932 kernel: RETBleed: Vulnerable Sep 16 04:57:42.991948 kernel: Speculative Store Bypass: Vulnerable Sep 16 04:57:42.991962 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 16 04:57:42.991976 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 16 04:57:42.991992 kernel: GDS: Unknown: Dependent on hypervisor status Sep 16 04:57:42.992005 kernel: active return thunk: its_return_thunk Sep 16 04:57:42.992019 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 16 04:57:42.992033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 16 04:57:42.992047 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 16 04:57:42.992062 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 16 04:57:42.992076 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 16 04:57:42.992090 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 16 04:57:42.992104 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 16 04:57:42.992118 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 16 04:57:42.992133 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 16 04:57:42.992150 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 16 04:57:42.992164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 16 04:57:42.994632 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 16 04:57:42.995265 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 16 04:57:42.995284 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 16 04:57:42.995301 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 16 04:57:42.995317 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 16 04:57:42.995333 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 16 04:57:42.995349 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 16 04:57:42.995365 kernel: Freeing SMP alternatives memory: 32K Sep 16 04:57:42.995381 kernel: pid_max: default: 32768 minimum: 301 Sep 16 04:57:42.995402 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 16 04:57:42.995418 kernel: landlock: Up and running. Sep 16 04:57:42.995434 kernel: SELinux: Initializing. Sep 16 04:57:42.995450 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 16 04:57:42.995466 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 16 04:57:42.995482 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4) Sep 16 04:57:42.995498 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 16 04:57:42.995514 kernel: signal: max sigframe size: 3632 Sep 16 04:57:42.995530 kernel: rcu: Hierarchical SRCU implementation. Sep 16 04:57:42.995547 kernel: rcu: Max phase no-delay instances is 400. Sep 16 04:57:42.995564 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 16 04:57:42.995583 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 16 04:57:42.995599 kernel: smp: Bringing up secondary CPUs ... Sep 16 04:57:42.995615 kernel: smpboot: x86: Booting SMP configuration: Sep 16 04:57:42.995631 kernel: .... node #0, CPUs: #1 Sep 16 04:57:42.995649 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 16 04:57:42.995666 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Sep 16 04:57:42.995682 kernel: smp: Brought up 1 node, 2 CPUs Sep 16 04:57:42.995698 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS) Sep 16 04:57:42.995717 kernel: Memory: 1908056K/2037804K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54096K init, 2868K bss, 125192K reserved, 0K cma-reserved) Sep 16 04:57:42.995733 kernel: devtmpfs: initialized Sep 16 04:57:42.995749 kernel: x86/mm: Memory block size: 128MB Sep 16 04:57:42.995765 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 16 04:57:42.995781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 16 04:57:42.995798 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 16 04:57:42.995813 kernel: pinctrl core: initialized pinctrl subsystem Sep 16 04:57:42.995829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 16 04:57:42.995845 kernel: audit: initializing netlink subsys (disabled) Sep 16 04:57:42.995864 kernel: audit: type=2000 audit(1757998660.624:1): state=initialized audit_enabled=0 res=1 Sep 16 04:57:42.995880 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 16 04:57:42.995896 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 16 04:57:42.995912 kernel: cpuidle: using governor menu Sep 16 04:57:42.995928 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 16 04:57:42.995944 kernel: dca service started, version 1.12.1 Sep 16 04:57:42.995960 kernel: PCI: Using configuration type 1 for base access Sep 16 04:57:42.995976 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 16 04:57:42.995992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 16 04:57:42.996011 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 16 04:57:42.996027 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 16 04:57:42.996043 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 16 04:57:42.996059 kernel: ACPI: Added _OSI(Module Device) Sep 16 04:57:42.996075 kernel: ACPI: Added _OSI(Processor Device) Sep 16 04:57:42.996091 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 16 04:57:42.996107 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 16 04:57:42.996124 kernel: ACPI: Interpreter enabled Sep 16 04:57:42.996140 kernel: ACPI: PM: (supports S0 S5) Sep 16 04:57:42.996159 kernel: ACPI: Using IOAPIC for interrupt routing Sep 16 04:57:42.996175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 16 04:57:42.996203 kernel: PCI: Using E820 reservations for host bridge windows Sep 16 04:57:42.996219 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 16 04:57:42.996235 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 16 04:57:42.996452 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 16 04:57:42.996590 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Sep 16 04:57:42.996737 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Sep 16 04:57:42.996756 kernel: acpiphp: Slot [3] registered Sep 16 04:57:42.996773 kernel: acpiphp: Slot [4] registered Sep 16 04:57:42.996789 kernel: acpiphp: Slot [5] registered Sep 16 04:57:42.996805 kernel: acpiphp: Slot [6] registered Sep 16 04:57:42.996821 kernel: acpiphp: Slot [7] registered Sep 16 04:57:42.996837 kernel: acpiphp: Slot [8] registered Sep 16 04:57:42.996854 kernel: acpiphp: Slot [9] registered 
Sep 16 04:57:42.996870 kernel: acpiphp: Slot [10] registered Sep 16 04:57:42.996888 kernel: acpiphp: Slot [11] registered Sep 16 04:57:42.996904 kernel: acpiphp: Slot [12] registered Sep 16 04:57:42.996920 kernel: acpiphp: Slot [13] registered Sep 16 04:57:42.996936 kernel: acpiphp: Slot [14] registered Sep 16 04:57:42.996952 kernel: acpiphp: Slot [15] registered Sep 16 04:57:42.996968 kernel: acpiphp: Slot [16] registered Sep 16 04:57:42.996984 kernel: acpiphp: Slot [17] registered Sep 16 04:57:42.998277 kernel: acpiphp: Slot [18] registered Sep 16 04:57:42.998304 kernel: acpiphp: Slot [19] registered Sep 16 04:57:42.998316 kernel: acpiphp: Slot [20] registered Sep 16 04:57:42.998334 kernel: acpiphp: Slot [21] registered Sep 16 04:57:42.998347 kernel: acpiphp: Slot [22] registered Sep 16 04:57:42.998360 kernel: acpiphp: Slot [23] registered Sep 16 04:57:42.998373 kernel: acpiphp: Slot [24] registered Sep 16 04:57:42.998388 kernel: acpiphp: Slot [25] registered Sep 16 04:57:42.998403 kernel: acpiphp: Slot [26] registered Sep 16 04:57:42.998418 kernel: acpiphp: Slot [27] registered Sep 16 04:57:42.998433 kernel: acpiphp: Slot [28] registered Sep 16 04:57:42.998448 kernel: acpiphp: Slot [29] registered Sep 16 04:57:42.998466 kernel: acpiphp: Slot [30] registered Sep 16 04:57:42.998481 kernel: acpiphp: Slot [31] registered Sep 16 04:57:42.998497 kernel: PCI host bridge to bus 0000:00 Sep 16 04:57:42.998674 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 16 04:57:42.998799 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 16 04:57:42.998916 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 16 04:57:42.999028 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 16 04:57:42.999138 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 16 04:57:43.000356 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 16 04:57:43.000522 
kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Sep 16 04:57:43.000683 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Sep 16 04:57:43.000880 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Sep 16 04:57:43.001022 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 16 04:57:43.001160 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 16 04:57:43.003434 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 16 04:57:43.003577 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 16 04:57:43.003704 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 16 04:57:43.003829 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 16 04:57:43.003953 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 16 04:57:43.004086 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Sep 16 04:57:43.004274 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Sep 16 04:57:43.004426 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Sep 16 04:57:43.004577 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 16 04:57:43.004734 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Sep 16 04:57:43.004872 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Sep 16 04:57:43.005012 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Sep 16 04:57:43.005148 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Sep 16 04:57:43.005173 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 16 04:57:43.006943 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 16 04:57:43.006971 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 16 04:57:43.006988 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 
11 Sep 16 04:57:43.007004 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 16 04:57:43.007020 kernel: iommu: Default domain type: Translated Sep 16 04:57:43.007036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 16 04:57:43.007052 kernel: efivars: Registered efivars operations Sep 16 04:57:43.007068 kernel: PCI: Using ACPI for IRQ routing Sep 16 04:57:43.007088 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 16 04:57:43.007104 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Sep 16 04:57:43.007118 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 16 04:57:43.007133 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 16 04:57:43.007326 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 16 04:57:43.007466 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 16 04:57:43.007600 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 16 04:57:43.007620 kernel: vgaarb: loaded Sep 16 04:57:43.007640 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 16 04:57:43.007656 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 16 04:57:43.007672 kernel: clocksource: Switched to clocksource kvm-clock Sep 16 04:57:43.007688 kernel: VFS: Disk quotas dquot_6.6.0 Sep 16 04:57:43.007704 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 16 04:57:43.007720 kernel: pnp: PnP ACPI init Sep 16 04:57:43.007736 kernel: pnp: PnP ACPI: found 5 devices Sep 16 04:57:43.007752 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 16 04:57:43.007768 kernel: NET: Registered PF_INET protocol family Sep 16 04:57:43.007787 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 16 04:57:43.007803 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 16 04:57:43.007820 kernel: Table-perturb hash table entries: 
65536 (order: 6, 262144 bytes, linear) Sep 16 04:57:43.007836 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 16 04:57:43.007851 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 16 04:57:43.007867 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 16 04:57:43.007883 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 16 04:57:43.007899 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 16 04:57:43.007914 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 16 04:57:43.007933 kernel: NET: Registered PF_XDP protocol family Sep 16 04:57:43.008061 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 16 04:57:43.008182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 16 04:57:43.012660 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 16 04:57:43.012830 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 16 04:57:43.012955 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 16 04:57:43.013981 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 16 04:57:43.017287 kernel: PCI: CLS 0 bytes, default 64 Sep 16 04:57:43.017317 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 16 04:57:43.017334 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Sep 16 04:57:43.017350 kernel: clocksource: Switched to clocksource tsc Sep 16 04:57:43.017366 kernel: Initialise system trusted keyrings Sep 16 04:57:43.017382 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 16 04:57:43.017399 kernel: Key type asymmetric registered Sep 16 04:57:43.017414 kernel: Asymmetric key parser 'x509' registered Sep 16 04:57:43.017430 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 16 04:57:43.017446 
kernel: io scheduler mq-deadline registered Sep 16 04:57:43.017465 kernel: io scheduler kyber registered Sep 16 04:57:43.017481 kernel: io scheduler bfq registered Sep 16 04:57:43.017497 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 16 04:57:43.017513 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 16 04:57:43.017528 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 16 04:57:43.017544 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 16 04:57:43.017560 kernel: i8042: Warning: Keylock active Sep 16 04:57:43.017575 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 16 04:57:43.017591 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 16 04:57:43.017773 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 16 04:57:43.017901 kernel: rtc_cmos 00:00: registered as rtc0 Sep 16 04:57:43.018023 kernel: rtc_cmos 00:00: setting system clock to 2025-09-16T04:57:42 UTC (1757998662) Sep 16 04:57:43.018145 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 16 04:57:43.018189 kernel: intel_pstate: CPU model not supported Sep 16 04:57:43.019250 kernel: efifb: probing for efifb Sep 16 04:57:43.019269 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 16 04:57:43.019286 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 16 04:57:43.019305 kernel: efifb: scrolling: redraw Sep 16 04:57:43.019321 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 16 04:57:43.019338 kernel: Console: switching to colour frame buffer device 100x37 Sep 16 04:57:43.019355 kernel: fb0: EFI VGA frame buffer device Sep 16 04:57:43.019371 kernel: pstore: Using crash dump compression: deflate Sep 16 04:57:43.019388 kernel: pstore: Registered efi_pstore as persistent store backend Sep 16 04:57:43.019405 kernel: NET: Registered PF_INET6 protocol family Sep 16 04:57:43.019422 kernel: Segment Routing with IPv6 Sep 16 04:57:43.019439 kernel: In-situ OAM 
(IOAM) with IPv6 Sep 16 04:57:43.019458 kernel: NET: Registered PF_PACKET protocol family Sep 16 04:57:43.019475 kernel: Key type dns_resolver registered Sep 16 04:57:43.019491 kernel: IPI shorthand broadcast: enabled Sep 16 04:57:43.019509 kernel: sched_clock: Marking stable (2689009429, 195495043)->(3017721004, -133216532) Sep 16 04:57:43.019526 kernel: registered taskstats version 1 Sep 16 04:57:43.019542 kernel: Loading compiled-in X.509 certificates Sep 16 04:57:43.019559 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: d1d5b0d56b9b23dabf19e645632ff93bf659b3bf' Sep 16 04:57:43.019576 kernel: Demotion targets for Node 0: null Sep 16 04:57:43.019592 kernel: Key type .fscrypt registered Sep 16 04:57:43.019611 kernel: Key type fscrypt-provisioning registered Sep 16 04:57:43.019628 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 16 04:57:43.019645 kernel: ima: Allocated hash algorithm: sha1 Sep 16 04:57:43.019661 kernel: ima: No architecture policies found Sep 16 04:57:43.019678 kernel: clk: Disabling unused clocks Sep 16 04:57:43.019694 kernel: Warning: unable to open an initial console. Sep 16 04:57:43.019711 kernel: Freeing unused kernel image (initmem) memory: 54096K Sep 16 04:57:43.019727 kernel: Write protecting the kernel read-only data: 24576k Sep 16 04:57:43.019745 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 16 04:57:43.019764 kernel: Run /init as init process Sep 16 04:57:43.019781 kernel: with arguments: Sep 16 04:57:43.019798 kernel: /init Sep 16 04:57:43.019814 kernel: with environment: Sep 16 04:57:43.019830 kernel: HOME=/ Sep 16 04:57:43.019847 kernel: TERM=linux Sep 16 04:57:43.019866 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 16 04:57:43.019885 systemd[1]: Successfully made /usr/ read-only. 
Sep 16 04:57:43.019907 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:57:43.019926 systemd[1]: Detected virtualization amazon.
Sep 16 04:57:43.019943 systemd[1]: Detected architecture x86-64.
Sep 16 04:57:43.019959 systemd[1]: Running in initrd.
Sep 16 04:57:43.019975 systemd[1]: No hostname configured, using default hostname.
Sep 16 04:57:43.019996 systemd[1]: Hostname set to .
Sep 16 04:57:43.020013 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:57:43.020030 systemd[1]: Queued start job for default target initrd.target.
Sep 16 04:57:43.020050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:43.020067 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:43.020086 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 16 04:57:43.020104 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:57:43.020121 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 16 04:57:43.020143 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 16 04:57:43.020162 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 16 04:57:43.020180 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 16 04:57:43.022004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:43.022034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:43.022060 systemd[1]: Reached target paths.target - Path Units.
Sep 16 04:57:43.022081 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:57:43.022098 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:57:43.022115 systemd[1]: Reached target timers.target - Timer Units.
Sep 16 04:57:43.022134 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:57:43.022152 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:57:43.022169 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 16 04:57:43.022186 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 16 04:57:43.022235 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:43.022253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:43.022275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:43.022292 systemd[1]: Reached target sockets.target - Socket Units.
Sep 16 04:57:43.022310 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 16 04:57:43.022327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:57:43.022345 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 16 04:57:43.022364 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 16 04:57:43.022382 systemd[1]: Starting systemd-fsck-usr.service...
Sep 16 04:57:43.022399 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:57:43.022420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:57:43.022437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:43.022455 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 16 04:57:43.022514 systemd-journald[207]: Collecting audit messages is disabled.
Sep 16 04:57:43.022556 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:43.022574 systemd[1]: Finished systemd-fsck-usr.service.
Sep 16 04:57:43.022593 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 16 04:57:43.022611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 16 04:57:43.022631 systemd-journald[207]: Journal started
Sep 16 04:57:43.022670 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2696a7b1a4f13cacd9f5c7d6bb0db4) is 4.8M, max 38.4M, 33.6M free.
Sep 16 04:57:42.997377 systemd-modules-load[208]: Inserted module 'overlay'
Sep 16 04:57:43.028262 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:57:43.044389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:57:43.052445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:43.065271 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 16 04:57:43.067478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 16 04:57:43.072243 kernel: Bridge firewalling registered
Sep 16 04:57:43.073301 systemd-modules-load[208]: Inserted module 'br_netfilter'
Sep 16 04:57:43.077590 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 16 04:57:43.079412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:43.081388 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 16 04:57:43.083113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:43.089357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:57:43.092437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:57:43.103723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:57:43.111370 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 16 04:57:43.116805 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:43.119271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:43.132406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:43.149588 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0b876f86a632750e9937176808a48c2452d5168964273bcfc3c72f2a26140c06
Sep 16 04:57:43.196110 systemd-resolved[246]: Positive Trust Anchors:
Sep 16 04:57:43.197113 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 16 04:57:43.197174 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 16 04:57:43.204382 systemd-resolved[246]: Defaulting to hostname 'linux'.
Sep 16 04:57:43.208026 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 16 04:57:43.209061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:57:43.257240 kernel: SCSI subsystem initialized
Sep 16 04:57:43.267233 kernel: Loading iSCSI transport class v2.0-870.
Sep 16 04:57:43.279227 kernel: iscsi: registered transport (tcp)
Sep 16 04:57:43.302373 kernel: iscsi: registered transport (qla4xxx)
Sep 16 04:57:43.302459 kernel: QLogic iSCSI HBA Driver
Sep 16 04:57:43.323031 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:57:43.346085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:43.349678 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:57:43.396418 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:57:43.398896 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 16 04:57:43.455243 kernel: raid6: avx512x4 gen() 17610 MB/s
Sep 16 04:57:43.473249 kernel: raid6: avx512x2 gen() 17668 MB/s
Sep 16 04:57:43.491236 kernel: raid6: avx512x1 gen() 17597 MB/s
Sep 16 04:57:43.509245 kernel: raid6: avx2x4 gen() 17553 MB/s
Sep 16 04:57:43.527235 kernel: raid6: avx2x2 gen() 17515 MB/s
Sep 16 04:57:43.546375 kernel: raid6: avx2x1 gen() 13368 MB/s
Sep 16 04:57:43.546465 kernel: raid6: using algorithm avx512x2 gen() 17668 MB/s
Sep 16 04:57:43.566426 kernel: raid6: .... xor() 23868 MB/s, rmw enabled
Sep 16 04:57:43.566509 kernel: raid6: using avx512x2 recovery algorithm
Sep 16 04:57:43.590237 kernel: xor: automatically using best checksumming function avx
Sep 16 04:57:43.764241 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 16 04:57:43.771667 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:57:43.774154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:43.808168 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Sep 16 04:57:43.814516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:57:43.817382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 16 04:57:43.852227 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Sep 16 04:57:43.852418 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Sep 16 04:57:43.884991 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:57:43.887235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:57:43.962003 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:43.966777 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 16 04:57:44.063262 kernel: cryptd: max_cpu_qlen set to 1000
Sep 16 04:57:44.067496 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 16 04:57:44.070384 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 16 04:57:44.079733 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 16 04:57:44.079989 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Sep 16 04:57:44.086104 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 16 04:57:44.096742 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:dd:d5:aa:20:8f
Sep 16 04:57:44.099222 kernel: AES CTR mode by8 optimization enabled
Sep 16 04:57:44.117672 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 16 04:57:44.137232 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 16 04:57:44.137305 kernel: GPT:9289727 != 16777215
Sep 16 04:57:44.137328 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 16 04:57:44.137349 kernel: GPT:9289727 != 16777215
Sep 16 04:57:44.137366 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 16 04:57:44.137384 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:57:44.162295 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 04:57:44.167542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:44.168698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:44.171121 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:44.175687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:44.178079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:44.191650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:44.191952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:44.196371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 16 04:57:44.204285 kernel: nvme nvme0: using unchecked data buffer
Sep 16 04:57:44.248378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:44.348709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 16 04:57:44.361956 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 16 04:57:44.362935 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:57:44.373725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 16 04:57:44.374422 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 16 04:57:44.393507 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 16 04:57:44.394161 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:57:44.395551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:44.396906 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:57:44.398822 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 16 04:57:44.403355 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 16 04:57:44.423519 disk-uuid[695]: Primary Header is updated.
Sep 16 04:57:44.423519 disk-uuid[695]: Secondary Entries is updated.
Sep 16 04:57:44.423519 disk-uuid[695]: Secondary Header is updated.
Sep 16 04:57:44.431255 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:57:44.431649 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:57:44.441225 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:57:45.450297 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 16 04:57:45.450695 disk-uuid[698]: The operation has completed successfully.
Sep 16 04:57:45.577099 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 16 04:57:45.577267 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 16 04:57:45.618541 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 16 04:57:45.632814 sh[961]: Success
Sep 16 04:57:45.653253 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 16 04:57:45.653330 kernel: device-mapper: uevent: version 1.0.3
Sep 16 04:57:45.657723 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 16 04:57:45.669288 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 16 04:57:45.750719 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 16 04:57:45.756324 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 16 04:57:45.772238 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 16 04:57:45.793226 kernel: BTRFS: device fsid f1b91845-3914-4d21-a370-6d760ee45b2e devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (984)
Sep 16 04:57:45.796830 kernel: BTRFS info (device dm-0): first mount of filesystem f1b91845-3914-4d21-a370-6d760ee45b2e
Sep 16 04:57:45.798646 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:57:45.811299 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 16 04:57:45.811381 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 16 04:57:45.814295 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 16 04:57:45.817594 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 16 04:57:45.819011 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:57:45.819905 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 16 04:57:45.822372 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 16 04:57:45.823869 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 16 04:57:45.856227 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1015)
Sep 16 04:57:45.860631 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:57:45.860717 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:57:45.881499 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:57:45.881573 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:57:45.890225 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:57:45.892447 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 16 04:57:45.895499 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 16 04:57:45.943704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:57:45.946586 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 16 04:57:45.983753 systemd-networkd[1153]: lo: Link UP
Sep 16 04:57:45.983769 systemd-networkd[1153]: lo: Gained carrier
Sep 16 04:57:45.985116 systemd-networkd[1153]: Enumeration completed
Sep 16 04:57:45.985241 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 16 04:57:45.985726 systemd[1]: Reached target network.target - Network.
Sep 16 04:57:45.986426 systemd-networkd[1153]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:45.986431 systemd-networkd[1153]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 16 04:57:45.989093 systemd-networkd[1153]: eth0: Link UP
Sep 16 04:57:45.989100 systemd-networkd[1153]: eth0: Gained carrier
Sep 16 04:57:45.989112 systemd-networkd[1153]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 16 04:57:46.002291 systemd-networkd[1153]: eth0: DHCPv4 address 172.31.28.73/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 16 04:57:46.331018 ignition[1096]: Ignition 2.22.0
Sep 16 04:57:46.331036 ignition[1096]: Stage: fetch-offline
Sep 16 04:57:46.331251 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:46.331260 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:46.331780 ignition[1096]: Ignition finished successfully
Sep 16 04:57:46.333744 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:57:46.335060 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 16 04:57:46.368838 ignition[1164]: Ignition 2.22.0
Sep 16 04:57:46.368861 ignition[1164]: Stage: fetch
Sep 16 04:57:46.369289 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:46.369302 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:46.369416 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:46.377214 ignition[1164]: PUT result: OK
Sep 16 04:57:46.378759 ignition[1164]: parsed url from cmdline: ""
Sep 16 04:57:46.378767 ignition[1164]: no config URL provided
Sep 16 04:57:46.378777 ignition[1164]: reading system config file "/usr/lib/ignition/user.ign"
Sep 16 04:57:46.378788 ignition[1164]: no config at "/usr/lib/ignition/user.ign"
Sep 16 04:57:46.378809 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:46.379352 ignition[1164]: PUT result: OK
Sep 16 04:57:46.379448 ignition[1164]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 16 04:57:46.379967 ignition[1164]: GET result: OK
Sep 16 04:57:46.380023 ignition[1164]: parsing config with SHA512: b284cea0bc9dc6bfb7c0da6ec12f78e085d0353eec4d1a39199e479a02c1d30fdaba669d09a4d4ef49de031557139d19aa4e0822793624dcc05453429f1e9f12
Sep 16 04:57:46.387403 unknown[1164]: fetched base config from "system"
Sep 16 04:57:46.387417 unknown[1164]: fetched base config from "system"
Sep 16 04:57:46.387747 ignition[1164]: fetch: fetch complete
Sep 16 04:57:46.387422 unknown[1164]: fetched user config from "aws"
Sep 16 04:57:46.387752 ignition[1164]: fetch: fetch passed
Sep 16 04:57:46.387790 ignition[1164]: Ignition finished successfully
Sep 16 04:57:46.389940 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 16 04:57:46.391490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 16 04:57:46.426349 ignition[1170]: Ignition 2.22.0
Sep 16 04:57:46.426371 ignition[1170]: Stage: kargs
Sep 16 04:57:46.426746 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:46.426758 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:46.426865 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:46.432851 ignition[1170]: PUT result: OK
Sep 16 04:57:46.435717 ignition[1170]: kargs: kargs passed
Sep 16 04:57:46.435800 ignition[1170]: Ignition finished successfully
Sep 16 04:57:46.438409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 16 04:57:46.440131 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 16 04:57:46.477477 ignition[1177]: Ignition 2.22.0
Sep 16 04:57:46.477499 ignition[1177]: Stage: disks
Sep 16 04:57:46.477895 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:46.477907 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:46.478023 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:46.478935 ignition[1177]: PUT result: OK
Sep 16 04:57:46.481681 ignition[1177]: disks: disks passed
Sep 16 04:57:46.481790 ignition[1177]: Ignition finished successfully
Sep 16 04:57:46.483866 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 16 04:57:46.484623 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 16 04:57:46.485243 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 16 04:57:46.485820 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:57:46.486427 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 16 04:57:46.486991 systemd[1]: Reached target basic.target - Basic System.
Sep 16 04:57:46.489379 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 16 04:57:46.541470 systemd-fsck[1185]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 16 04:57:46.544086 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 16 04:57:46.545947 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 16 04:57:46.693225 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fb1cb44f-955b-4cd0-8849-33ce3640d547 r/w with ordered data mode. Quota mode: none.
Sep 16 04:57:46.694093 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 16 04:57:46.695027 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:57:46.696959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:57:46.699283 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 16 04:57:46.700393 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 16 04:57:46.700926 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 16 04:57:46.700951 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:57:46.707301 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 16 04:57:46.709054 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 16 04:57:46.724245 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1204)
Sep 16 04:57:46.730043 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:57:46.730121 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:57:46.738882 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:57:46.738953 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:57:46.741067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:57:46.875933 initrd-setup-root[1229]: cut: /sysroot/etc/passwd: No such file or directory
Sep 16 04:57:46.881257 initrd-setup-root[1236]: cut: /sysroot/etc/group: No such file or directory
Sep 16 04:57:46.886294 initrd-setup-root[1243]: cut: /sysroot/etc/shadow: No such file or directory
Sep 16 04:57:46.891485 initrd-setup-root[1250]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 16 04:57:47.070521 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 16 04:57:47.072297 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 16 04:57:47.075388 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 16 04:57:47.093674 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 16 04:57:47.094402 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:57:47.123811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 16 04:57:47.135692 ignition[1318]: INFO : Ignition 2.22.0
Sep 16 04:57:47.135692 ignition[1318]: INFO : Stage: mount
Sep 16 04:57:47.137509 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:47.137509 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:47.137509 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:47.138899 ignition[1318]: INFO : PUT result: OK
Sep 16 04:57:47.143181 ignition[1318]: INFO : mount: mount passed
Sep 16 04:57:47.144983 ignition[1318]: INFO : Ignition finished successfully
Sep 16 04:57:47.145682 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 16 04:57:47.147331 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 16 04:57:47.164020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 16 04:57:47.192270 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1329)
Sep 16 04:57:47.196798 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8b047ef5-4757-404a-b211-2a505a425364
Sep 16 04:57:47.198514 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 16 04:57:47.205889 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 16 04:57:47.206952 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 16 04:57:47.209919 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 16 04:57:47.246480 ignition[1346]: INFO : Ignition 2.22.0
Sep 16 04:57:47.246480 ignition[1346]: INFO : Stage: files
Sep 16 04:57:47.248067 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:47.248067 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:47.248067 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:47.248067 ignition[1346]: INFO : PUT result: OK
Sep 16 04:57:47.250597 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping
Sep 16 04:57:47.251789 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 16 04:57:47.251789 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 16 04:57:47.256060 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 16 04:57:47.257106 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 16 04:57:47.258043 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 16 04:57:47.257506 unknown[1346]: wrote ssh authorized keys file for user: core
Sep 16 04:57:47.260288 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 16 04:57:47.261265 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 16 04:57:47.303800 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 16 04:57:47.317339 systemd-networkd[1153]: eth0: Gained IPv6LL
Sep 16 04:57:47.473998 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 16 04:57:47.473998 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:57:47.473998 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 16 04:57:47.849295 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 16 04:57:48.977891 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:57:48.982188 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 16 04:57:48.993266 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:57:48.993266 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 16 04:57:48.993266 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:48.993266 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:48.993266 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:48.993266 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 16 04:57:49.438843 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 16 04:57:52.695588 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 16 04:57:52.695588 ignition[1346]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 16 04:57:52.698359 ignition[1346]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:57:52.702527 ignition[1346]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 16 04:57:52.702527 ignition[1346]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 16 04:57:52.702527 ignition[1346]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Sep 16 04:57:52.704967 ignition[1346]: INFO : files: op(e): [finished] setting preset to enabled for
"prepare-helm.service" Sep 16 04:57:52.704967 ignition[1346]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:57:52.704967 ignition[1346]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 16 04:57:52.704967 ignition[1346]: INFO : files: files passed Sep 16 04:57:52.704967 ignition[1346]: INFO : Ignition finished successfully Sep 16 04:57:52.704940 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 16 04:57:52.705922 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 16 04:57:52.709312 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 16 04:57:52.726841 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 16 04:57:52.726965 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 16 04:57:52.734269 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:57:52.734269 initrd-setup-root-after-ignition[1376]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:57:52.736246 initrd-setup-root-after-ignition[1380]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 16 04:57:52.737599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 16 04:57:52.738228 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 16 04:57:52.740807 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 16 04:57:52.787625 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 16 04:57:52.787776 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 16 04:57:52.789581 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 16 04:57:52.790546 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 16 04:57:52.791465 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 16 04:57:52.792833 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 16 04:57:52.813589 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:57:52.815994 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 16 04:57:52.839465 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 16 04:57:52.840380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:52.841642 systemd[1]: Stopped target timers.target - Timer Units.
Sep 16 04:57:52.842616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 16 04:57:52.842862 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 16 04:57:52.844057 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 16 04:57:52.845160 systemd[1]: Stopped target basic.target - Basic System.
Sep 16 04:57:52.846025 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 16 04:57:52.846864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 16 04:57:52.847721 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 16 04:57:52.848516 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 16 04:57:52.849433 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 16 04:57:52.850315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 16 04:57:52.851137 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 16 04:57:52.852442 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 16 04:57:52.853433 systemd[1]: Stopped target swap.target - Swaps.
Sep 16 04:57:52.854216 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 16 04:57:52.854470 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 16 04:57:52.855528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:52.856412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:52.857265 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 16 04:57:52.857409 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:52.858092 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 16 04:57:52.858352 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 16 04:57:52.859706 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 16 04:57:52.859966 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 16 04:57:52.860878 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 16 04:57:52.861091 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 16 04:57:52.865357 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 16 04:57:52.865940 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 16 04:57:52.866142 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:52.870481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 16 04:57:52.871760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 16 04:57:52.872718 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:52.874220 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 16 04:57:52.875105 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 16 04:57:52.883009 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 16 04:57:52.883336 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 16 04:57:52.910239 ignition[1400]: INFO : Ignition 2.22.0
Sep 16 04:57:52.910239 ignition[1400]: INFO : Stage: umount
Sep 16 04:57:52.910239 ignition[1400]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 16 04:57:52.910239 ignition[1400]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 16 04:57:52.913758 ignition[1400]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 16 04:57:52.913758 ignition[1400]: INFO : PUT result: OK
Sep 16 04:57:52.912639 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 16 04:57:52.918890 ignition[1400]: INFO : umount: umount passed
Sep 16 04:57:52.918890 ignition[1400]: INFO : Ignition finished successfully
Sep 16 04:57:52.919440 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 16 04:57:52.919598 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 16 04:57:52.921098 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 16 04:57:52.921294 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 16 04:57:52.923130 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 16 04:57:52.923780 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 16 04:57:52.924417 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 16 04:57:52.924487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 16 04:57:52.925253 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 16 04:57:52.925323 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 16 04:57:52.925918 systemd[1]: Stopped target network.target - Network.
Sep 16 04:57:52.926569 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 16 04:57:52.926640 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 16 04:57:52.927290 systemd[1]: Stopped target paths.target - Path Units.
Sep 16 04:57:52.927867 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 16 04:57:52.931399 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:52.932008 systemd[1]: Stopped target slices.target - Slice Units.
Sep 16 04:57:52.933720 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 16 04:57:52.934595 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 16 04:57:52.934662 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 16 04:57:52.935302 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 16 04:57:52.935361 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 16 04:57:52.935972 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 16 04:57:52.936060 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 16 04:57:52.936852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 16 04:57:52.936918 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 16 04:57:52.937548 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 16 04:57:52.937621 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 16 04:57:52.938418 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 16 04:57:52.939060 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:52.942453 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 16 04:57:52.942580 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 16 04:57:52.947041 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 16 04:57:52.947873 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 16 04:57:52.947965 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:52.951489 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 16 04:57:52.951850 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 16 04:57:52.952025 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 16 04:57:52.954303 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 16 04:57:52.955426 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 16 04:57:52.955950 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 16 04:57:52.956023 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:52.957941 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 16 04:57:52.958658 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 16 04:57:52.958734 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 16 04:57:52.959455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:57:52.959522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:52.963396 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 16 04:57:52.963477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:52.964104 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:52.966395 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:57:52.976075 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 16 04:57:52.976346 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 16 04:57:52.979383 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 16 04:57:52.979445 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:52.980465 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 16 04:57:52.980516 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:52.981394 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 16 04:57:52.981465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 16 04:57:52.983800 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 16 04:57:52.983871 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 16 04:57:52.985144 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 16 04:57:52.985234 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 16 04:57:52.987852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 16 04:57:52.990642 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 16 04:57:52.990737 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:52.993152 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 16 04:57:52.993275 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:52.994455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 16 04:57:52.994524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 16 04:57:52.996744 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 16 04:57:52.998720 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 16 04:57:53.006710 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 16 04:57:53.006854 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 16 04:57:53.008041 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 16 04:57:53.010010 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 16 04:57:53.033732 systemd[1]: Switching root.
Sep 16 04:57:53.070560 systemd-journald[207]: Journal stopped
Sep 16 04:57:54.577563 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Sep 16 04:57:54.577664 kernel: SELinux: policy capability network_peer_controls=1
Sep 16 04:57:54.577695 kernel: SELinux: policy capability open_perms=1
Sep 16 04:57:54.577716 kernel: SELinux: policy capability extended_socket_class=1
Sep 16 04:57:54.577742 kernel: SELinux: policy capability always_check_network=0
Sep 16 04:57:54.577762 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 16 04:57:54.577783 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 16 04:57:54.577809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 16 04:57:54.577834 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 16 04:57:54.577853 kernel: SELinux: policy capability userspace_initial_context=0
Sep 16 04:57:54.577872 kernel: audit: type=1403 audit(1757998673.395:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 16 04:57:54.577891 systemd[1]: Successfully loaded SELinux policy in 68.511ms.
Sep 16 04:57:54.577926 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.454ms.
Sep 16 04:57:54.577947 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 16 04:57:54.577966 systemd[1]: Detected virtualization amazon.
Sep 16 04:57:54.577991 systemd[1]: Detected architecture x86-64.
Sep 16 04:57:54.578011 systemd[1]: Detected first boot.
Sep 16 04:57:54.578030 systemd[1]: Initializing machine ID from VM UUID.
Sep 16 04:57:54.578050 zram_generator::config[1444]: No configuration found.
Sep 16 04:57:54.578070 kernel: Guest personality initialized and is inactive
Sep 16 04:57:54.578088 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 16 04:57:54.578106 kernel: Initialized host personality
Sep 16 04:57:54.578124 kernel: NET: Registered PF_VSOCK protocol family
Sep 16 04:57:54.578143 systemd[1]: Populated /etc with preset unit settings.
Sep 16 04:57:54.578170 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 16 04:57:54.585698 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 16 04:57:54.585771 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 16 04:57:54.585794 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:57:54.585816 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 16 04:57:54.585836 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 16 04:57:54.585856 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 16 04:57:54.585879 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 16 04:57:54.585901 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 16 04:57:54.585930 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 16 04:57:54.585952 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 16 04:57:54.585972 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 16 04:57:54.585993 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 16 04:57:54.586014 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 16 04:57:54.586034 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 16 04:57:54.586053 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 16 04:57:54.586073 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 16 04:57:54.586095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 16 04:57:54.586115 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 16 04:57:54.586134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 16 04:57:54.586153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 16 04:57:54.586173 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 16 04:57:54.586220 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 16 04:57:54.586240 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 16 04:57:54.586259 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 16 04:57:54.586282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 16 04:57:54.586303 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 16 04:57:54.586324 systemd[1]: Reached target slices.target - Slice Units.
Sep 16 04:57:54.586343 systemd[1]: Reached target swap.target - Swaps.
Sep 16 04:57:54.586363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 16 04:57:54.586382 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 16 04:57:54.586406 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 16 04:57:54.586425 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 16 04:57:54.586445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 16 04:57:54.586468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 16 04:57:54.586488 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 16 04:57:54.586508 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 16 04:57:54.586528 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 16 04:57:54.586547 systemd[1]: Mounting media.mount - External Media Directory...
Sep 16 04:57:54.586572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:54.586591 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 16 04:57:54.586612 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 16 04:57:54.586631 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 16 04:57:54.586657 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 16 04:57:54.586676 systemd[1]: Reached target machines.target - Containers.
Sep 16 04:57:54.586697 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 16 04:57:54.586716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 16 04:57:54.586743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 16 04:57:54.586763 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 16 04:57:54.586783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 16 04:57:54.586803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 16 04:57:54.586825 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 16 04:57:54.586845 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 16 04:57:54.586866 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 16 04:57:54.586886 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 16 04:57:54.586908 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 16 04:57:54.586928 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 16 04:57:54.586948 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 16 04:57:54.586969 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 16 04:57:54.586991 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 16 04:57:54.587016 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 16 04:57:54.587036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 16 04:57:54.587057 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 16 04:57:54.587079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 16 04:57:54.587099 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 16 04:57:54.587120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 16 04:57:54.587146 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 16 04:57:54.587168 systemd[1]: Stopped verity-setup.service.
Sep 16 04:57:54.587881 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:54.587931 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 16 04:57:54.587956 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 16 04:57:54.587977 systemd[1]: Mounted media.mount - External Media Directory.
Sep 16 04:57:54.587998 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 16 04:57:54.588019 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 16 04:57:54.588040 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 16 04:57:54.588063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 16 04:57:54.588084 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 16 04:57:54.588105 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 16 04:57:54.588126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 16 04:57:54.588150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 16 04:57:54.588171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 16 04:57:54.588222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 16 04:57:54.588244 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 16 04:57:54.588263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 16 04:57:54.588284 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 16 04:57:54.588303 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 16 04:57:54.588320 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 16 04:57:54.588344 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 16 04:57:54.588366 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 16 04:57:54.588386 kernel: loop: module loaded
Sep 16 04:57:54.588406 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 16 04:57:54.588429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 16 04:57:54.588507 systemd-journald[1523]: Collecting audit messages is disabled.
Sep 16 04:57:54.588553 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 16 04:57:54.588580 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 16 04:57:54.588604 systemd-journald[1523]: Journal started
Sep 16 04:57:54.588653 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec2696a7b1a4f13cacd9f5c7d6bb0db4) is 4.8M, max 38.4M, 33.6M free.
Sep 16 04:57:54.593929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 16 04:57:54.106751 systemd[1]: Queued start job for default target multi-user.target.
Sep 16 04:57:54.115700 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 16 04:57:54.116265 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 16 04:57:54.608285 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:57:54.624222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 16 04:57:54.624302 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 16 04:57:54.625881 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 16 04:57:54.627101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 16 04:57:54.629314 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 16 04:57:54.631264 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 16 04:57:54.648720 kernel: ACPI: bus type drm_connector registered
Sep 16 04:57:54.651676 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 16 04:57:54.653521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 16 04:57:54.665303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 16 04:57:54.682438 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 16 04:57:54.690068 kernel: fuse: init (API version 7.41)
Sep 16 04:57:54.684784 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 16 04:57:54.696574 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 16 04:57:54.706363 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 16 04:57:54.707579 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 16 04:57:54.708189 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 16 04:57:54.709537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 16 04:57:54.718094 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 16 04:57:54.731348 kernel: loop0: detected capacity change from 0 to 72368
Sep 16 04:57:54.748754 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 16 04:57:54.758672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 16 04:57:54.762986 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 16 04:57:54.771950 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 16 04:57:54.774670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:57:54.779291 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 16 04:57:54.792351 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec2696a7b1a4f13cacd9f5c7d6bb0db4 is 33.106ms for 1028 entries.
Sep 16 04:57:54.792351 systemd-journald[1523]: System Journal (/var/log/journal/ec2696a7b1a4f13cacd9f5c7d6bb0db4) is 8M, max 195.6M, 187.6M free.
Sep 16 04:57:54.832679 systemd-journald[1523]: Received client request to flush runtime journal.
Sep 16 04:57:54.838574 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 16 04:57:54.849409 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 16 04:57:54.866292 kernel: loop1: detected capacity change from 0 to 224512
Sep 16 04:57:54.869627 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 16 04:57:54.873377 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 16 04:57:54.916620 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Sep 16 04:57:54.917092 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Sep 16 04:57:54.925605 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 16 04:57:54.981502 kernel: loop2: detected capacity change from 0 to 128016
Sep 16 04:57:55.031097 kernel: loop3: detected capacity change from 0 to 110984
Sep 16 04:57:55.106245 kernel: loop4: detected capacity change from 0 to 72368
Sep 16 04:57:55.122593 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 16 04:57:55.132551 kernel: loop5: detected capacity change from 0 to 224512
Sep 16 04:57:55.178238 kernel: loop6: detected capacity change from 0 to 128016
Sep 16 04:57:55.221869 kernel: loop7: detected capacity change from 0 to 110984
Sep 16 04:57:55.250330 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 16 04:57:55.252413 (sd-merge)[1604]: Merged extensions into '/usr'.
Sep 16 04:57:55.258354 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 16 04:57:55.258893 systemd[1]: Reloading...
Sep 16 04:57:55.415281 zram_generator::config[1636]: No configuration found.
Sep 16 04:57:55.567795 ldconfig[1547]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 16 04:57:55.793234 systemd[1]: Reloading finished in 533 ms.
Sep 16 04:57:55.808178 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 16 04:57:55.814369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 16 04:57:55.824387 systemd[1]: Starting ensure-sysext.service...
Sep 16 04:57:55.827382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 16 04:57:55.861262 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)...
Sep 16 04:57:55.861409 systemd[1]: Reloading...
Sep 16 04:57:55.883554 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 16 04:57:55.883963 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 16 04:57:55.886526 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 16 04:57:55.888698 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 16 04:57:55.893083 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 16 04:57:55.893654 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 16 04:57:55.894585 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
Sep 16 04:57:55.903498 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:57:55.903655 systemd-tmpfiles[1683]: Skipping /boot
Sep 16 04:57:55.922270 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
Sep 16 04:57:55.922442 systemd-tmpfiles[1683]: Skipping /boot
Sep 16 04:57:55.958237 zram_generator::config[1706]: No configuration found.
Sep 16 04:57:56.192761 systemd[1]: Reloading finished in 330 ms.
Sep 16 04:57:56.213474 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 16 04:57:56.219261 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 16 04:57:56.225612 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:57:56.229332 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 16 04:57:56.231793 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 16 04:57:56.237936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 16 04:57:56.242344 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 16 04:57:56.244421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 16 04:57:56.253985 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 16 04:57:56.254421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:57:56.258286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 16 04:57:56.259710 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 16 04:57:56.264399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 16 04:57:56.265386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:57:56.265513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:57:56.265609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:57:56.271214 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 16 04:57:56.275739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:57:56.275928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:57:56.276074 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:57:56.276175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 16 04:57:56.276455 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:57:56.283615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:57:56.283877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 16 04:57:56.287536 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 16 04:57:56.289254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 16 04:57:56.289370 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 16 04:57:56.289530 systemd[1]: Reached target time-set.target - System Time Set. Sep 16 04:57:56.290014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 16 04:57:56.297452 systemd[1]: Finished ensure-sysext.service. Sep 16 04:57:56.312641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 16 04:57:56.312833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 16 04:57:56.313517 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 16 04:57:56.314258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 16 04:57:56.315175 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 16 04:57:56.316284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 16 04:57:56.316958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 16 04:57:56.317096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 16 04:57:56.320490 systemd-udevd[1768]: Using default interface naming scheme 'v255'. Sep 16 04:57:56.325815 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 16 04:57:56.327152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 16 04:57:56.328079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 16 04:57:56.341173 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 16 04:57:56.344488 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 16 04:57:56.365796 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 16 04:57:56.366325 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 16 04:57:56.373508 augenrules[1804]: No rules Sep 16 04:57:56.373601 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 16 04:57:56.375616 systemd[1]: audit-rules.service: Deactivated successfully. Sep 16 04:57:56.375821 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 16 04:57:56.378394 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 16 04:57:56.386915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 16 04:57:56.391336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 16 04:57:56.474080 systemd-resolved[1767]: Positive Trust Anchors: Sep 16 04:57:56.474435 systemd-resolved[1767]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 16 04:57:56.474533 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 16 04:57:56.481032 systemd-resolved[1767]: Defaulting to hostname 'linux'. Sep 16 04:57:56.483294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 16 04:57:56.483792 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 16 04:57:56.485270 systemd[1]: Reached target sysinit.target - System Initialization. Sep 16 04:57:56.485752 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 16 04:57:56.486133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 16 04:57:56.486487 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 16 04:57:56.488404 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 16 04:57:56.488870 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 16 04:57:56.489213 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 16 04:57:56.489535 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 16 04:57:56.489570 systemd[1]: Reached target paths.target - Path Units. 
Sep 16 04:57:56.489855 systemd[1]: Reached target timers.target - Timer Units. Sep 16 04:57:56.491349 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 16 04:57:56.494622 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 16 04:57:56.498478 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 16 04:57:56.500493 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 16 04:57:56.500903 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 16 04:57:56.505713 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 16 04:57:56.506869 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 16 04:57:56.508611 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 16 04:57:56.510892 systemd[1]: Reached target sockets.target - Socket Units. Sep 16 04:57:56.511316 systemd[1]: Reached target basic.target - Basic System. Sep 16 04:57:56.511776 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:57:56.511807 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 16 04:57:56.514635 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 16 04:57:56.517980 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 16 04:57:56.523349 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 16 04:57:56.528316 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 16 04:57:56.530921 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 16 04:57:56.532550 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 16 04:57:56.535384 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 16 04:57:56.541540 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 16 04:57:56.546208 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 04:57:56.552795 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 16 04:57:56.560381 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 16 04:57:56.573297 jq[1846]: false Sep 16 04:57:56.573539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 16 04:57:56.578540 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 16 04:57:56.581081 (udev-worker)[1839]: Network interface NamePolicy= disabled on kernel command line. Sep 16 04:57:56.594414 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 16 04:57:56.608181 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 16 04:57:56.608738 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 16 04:57:56.612406 systemd[1]: Starting update-engine.service - Update Engine... Sep 16 04:57:56.617700 extend-filesystems[1847]: Found /dev/nvme0n1p6 Sep 16 04:57:56.622308 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 16 04:57:56.624137 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 16 04:57:56.624919 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 16 04:57:56.625119 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 16 04:57:56.629497 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 16 04:57:56.655287 extend-filesystems[1847]: Found /dev/nvme0n1p9 Sep 16 04:57:56.661236 google_oslogin_nss_cache[1848]: oslogin_cache_refresh[1848]: Refreshing passwd entry cache Sep 16 04:57:56.658052 oslogin_cache_refresh[1848]: Refreshing passwd entry cache Sep 16 04:57:56.670791 dbus-daemon[1844]: [system] SELinux support is enabled Sep 16 04:57:56.672513 extend-filesystems[1847]: Checking size of /dev/nvme0n1p9 Sep 16 04:57:56.674735 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 16 04:57:56.681584 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 16 04:57:56.681774 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 16 04:57:56.687790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 16 04:57:56.691186 oslogin_cache_refresh[1848]: Failure getting users, quitting Sep 16 04:57:56.693051 google_oslogin_nss_cache[1848]: oslogin_cache_refresh[1848]: Failure getting users, quitting Sep 16 04:57:56.689327 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 16 04:57:56.689770 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 16 04:57:56.689788 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 16 04:57:56.701711 google_oslogin_nss_cache[1848]: oslogin_cache_refresh[1848]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Sep 16 04:57:56.701711 google_oslogin_nss_cache[1848]: oslogin_cache_refresh[1848]: Refreshing group entry cache Sep 16 04:57:56.701711 google_oslogin_nss_cache[1848]: oslogin_cache_refresh[1848]: Failure getting groups, quitting Sep 16 04:57:56.701711 google_oslogin_nss_cache[1848]: oslogin_cache_refresh[1848]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:57:56.699067 oslogin_cache_refresh[1848]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 16 04:57:56.699113 oslogin_cache_refresh[1848]: Refreshing group entry cache Sep 16 04:57:56.699960 oslogin_cache_refresh[1848]: Failure getting groups, quitting Sep 16 04:57:56.699969 oslogin_cache_refresh[1848]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 16 04:57:56.716224 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 16 04:57:56.711335 systemd-networkd[1815]: lo: Link UP Sep 16 04:57:56.711339 systemd-networkd[1815]: lo: Gained carrier Sep 16 04:57:56.713146 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 16 04:57:56.713944 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 16 04:57:56.720420 systemd-networkd[1815]: Enumeration completed Sep 16 04:57:56.722287 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 16 04:57:56.722976 systemd[1]: Reached target network.target - Network. Sep 16 04:57:56.726030 update_engine[1861]: I20250916 04:57:56.725922 1861 main.cc:92] Flatcar Update Engine starting Sep 16 04:57:56.726437 systemd[1]: Starting containerd.service - containerd container runtime... Sep 16 04:57:56.729755 update_engine[1861]: I20250916 04:57:56.729621 1861 update_check_scheduler.cc:74] Next update check in 8m37s Sep 16 04:57:56.730551 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 16 04:57:56.734037 jq[1862]: true Sep 16 04:57:56.735340 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 16 04:57:56.737415 systemd[1]: Started update-engine.service - Update Engine. Sep 16 04:57:56.753233 extend-filesystems[1847]: Resized partition /dev/nvme0n1p9 Sep 16 04:57:56.760716 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 16 04:57:56.764641 extend-filesystems[1896]: resize2fs 1.47.3 (8-Jul-2025) Sep 16 04:57:56.771339 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 16 04:57:56.777659 systemd[1]: motdgen.service: Deactivated successfully. Sep 16 04:57:56.779268 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 16 04:57:56.783342 kernel: ACPI: button: Power Button [PWRF] Sep 16 04:57:56.785641 coreos-metadata[1843]: Sep 16 04:57:56.785 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 16 04:57:56.802963 tar[1869]: linux-amd64/LICENSE Sep 16 04:57:56.802963 tar[1869]: linux-amd64/helm Sep 16 04:57:56.807503 kernel: mousedev: PS/2 mouse device common for all mice Sep 16 04:57:56.808959 jq[1897]: true Sep 16 04:57:56.809369 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 16 04:57:56.821149 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Sep 16 04:57:56.823306 systemd[1]: Finished setup-oem.service - Setup OEM. 
Sep 16 04:57:56.835565 (ntainerd)[1909]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 16 04:57:56.857013 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 16 04:57:56.857082 kernel: ACPI: button: Sleep Button [SLPF] Sep 16 04:57:56.869282 extend-filesystems[1896]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 16 04:57:56.869282 extend-filesystems[1896]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 16 04:57:56.869282 extend-filesystems[1896]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 16 04:57:56.877277 extend-filesystems[1847]: Resized filesystem in /dev/nvme0n1p9 Sep 16 04:57:56.870848 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 16 04:57:56.871058 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 16 04:57:56.917780 bash[1929]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:57:56.919257 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 16 04:57:56.938212 systemd[1]: Starting sshkeys.service... Sep 16 04:57:56.986293 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 16 04:57:56.990239 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Sep 16 04:57:57.009038 ntpd[1850]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: ---------------------------------------------------- Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: corporation. Support and training for ntp-4 are Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: available at https://www.nwtime.org/support Sep 16 04:57:57.009973 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: ---------------------------------------------------- Sep 16 04:57:57.009098 ntpd[1850]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:57:57.009105 ntpd[1850]: ---------------------------------------------------- Sep 16 04:57:57.009111 ntpd[1850]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:57:57.009117 ntpd[1850]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:57:57.009124 ntpd[1850]: corporation. Support and training for ntp-4 are Sep 16 04:57:57.009130 ntpd[1850]: available at https://www.nwtime.org/support Sep 16 04:57:57.009136 ntpd[1850]: ---------------------------------------------------- Sep 16 04:57:57.015060 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:57:57.015069 systemd-networkd[1815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 16 04:57:57.020504 systemd-networkd[1815]: eth0: Link UP Sep 16 04:57:57.020689 systemd-networkd[1815]: eth0: Gained carrier Sep 16 04:57:57.020718 systemd-networkd[1815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 16 04:57:57.022885 ntpd[1850]: proto: precision = 0.056 usec (-24) Sep 16 04:57:57.023161 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: proto: precision = 0.056 usec (-24) Sep 16 04:57:57.025122 ntpd[1850]: basedate set to 2025-09-04 Sep 16 04:57:57.025237 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: basedate set to 2025-09-04 Sep 16 04:57:57.025320 ntpd[1850]: gps base set to 2025-09-07 (week 2383) Sep 16 04:57:57.025515 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: gps base set to 2025-09-07 (week 2383) Sep 16 04:57:57.025682 ntpd[1850]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:57:57.026393 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:57:57.036724 kernel: ntpd[1850]: segfault at 24 ip 0000556b34b92aeb sp 00007fffc7d5d620 error 4 in ntpd[68aeb,556b34b30000+80000] likely on CPU 0 (core 0, socket 0) Sep 16 04:57:57.036803 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Sep 16 04:57:57.036824 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:57:57.036824 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:57:57.036824 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: Listen normally on 3 lo [::1]:123 Sep 16 04:57:57.036824 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: bind(20) AF_INET6 [fe80::4dd:d5ff:feaa:208f%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:57:57.036824 ntpd[1850]: 16 Sep 04:57:57 ntpd[1850]: unable to create socket on eth0 (4) for [fe80::4dd:d5ff:feaa:208f%2]:123 Sep 
16 04:57:57.026898 ntpd[1850]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:57:57.035101 systemd-networkd[1815]: eth0: DHCPv4 address 172.31.28.73/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 16 04:57:57.027089 ntpd[1850]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:57:57.027115 ntpd[1850]: Listen normally on 3 lo [::1]:123 Sep 16 04:57:57.027140 ntpd[1850]: bind(20) AF_INET6 [fe80::4dd:d5ff:feaa:208f%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:57:57.027157 ntpd[1850]: unable to create socket on eth0 (4) for [fe80::4dd:d5ff:feaa:208f%2]:123 Sep 16 04:57:57.034406 dbus-daemon[1844]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1815 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 16 04:57:57.043148 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 16 04:57:57.074171 systemd-coredump[1944]: Process 1850 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 16 04:57:57.077220 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 16 04:57:57.088301 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Sep 16 04:57:57.094400 systemd[1]: Started systemd-coredump@0-1944-0.service - Process Core Dump (PID 1944/UID 0). 
Sep 16 04:57:57.152968 coreos-metadata[1939]: Sep 16 04:57:57.152 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 16 04:57:57.155865 coreos-metadata[1939]: Sep 16 04:57:57.155 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 16 04:57:57.157866 coreos-metadata[1939]: Sep 16 04:57:57.157 INFO Fetch successful Sep 16 04:57:57.157939 coreos-metadata[1939]: Sep 16 04:57:57.157 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 16 04:57:57.160581 coreos-metadata[1939]: Sep 16 04:57:57.160 INFO Fetch successful Sep 16 04:57:57.165139 unknown[1939]: wrote ssh authorized keys file for user: core Sep 16 04:57:57.220495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:57:57.228418 update-ssh-keys[1962]: Updated "/home/core/.ssh/authorized_keys" Sep 16 04:57:57.233655 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 16 04:57:57.240824 systemd[1]: Finished sshkeys.service. Sep 16 04:57:57.262349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 16 04:57:57.262563 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:57:57.264549 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 16 04:57:57.272084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 16 04:57:57.309000 locksmithd[1891]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 16 04:57:57.354698 systemd-logind[1858]: Watching system buttons on /dev/input/event2 (Power Button) Sep 16 04:57:57.354722 systemd-logind[1858]: Watching system buttons on /dev/input/event3 (Sleep Button) Sep 16 04:57:57.354741 systemd-logind[1858]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 16 04:57:57.359370 systemd-logind[1858]: New seat seat0. 
Sep 16 04:57:57.362269 systemd[1]: Started systemd-logind.service - User Login Management. Sep 16 04:57:57.376698 containerd[1909]: time="2025-09-16T04:57:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 16 04:57:57.378725 containerd[1909]: time="2025-09-16T04:57:57.377840744Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 16 04:57:57.398332 containerd[1909]: time="2025-09-16T04:57:57.398249498Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.083µs" Sep 16 04:57:57.398332 containerd[1909]: time="2025-09-16T04:57:57.398286474Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 16 04:57:57.398332 containerd[1909]: time="2025-09-16T04:57:57.398308921Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 16 04:57:57.398447 containerd[1909]: time="2025-09-16T04:57:57.398439441Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 16 04:57:57.398469 containerd[1909]: time="2025-09-16T04:57:57.398453218Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 16 04:57:57.398505 containerd[1909]: time="2025-09-16T04:57:57.398486679Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399112 containerd[1909]: time="2025-09-16T04:57:57.398536091Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399112 containerd[1909]: time="2025-09-16T04:57:57.398552459Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399547 containerd[1909]: time="2025-09-16T04:57:57.399333702Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399547 containerd[1909]: time="2025-09-16T04:57:57.399359849Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399547 containerd[1909]: time="2025-09-16T04:57:57.399372249Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399547 containerd[1909]: time="2025-09-16T04:57:57.399380479Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 16 04:57:57.399547 containerd[1909]: time="2025-09-16T04:57:57.399464289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 16 04:57:57.405218 containerd[1909]: time="2025-09-16T04:57:57.400379570Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:57:57.405218 containerd[1909]: time="2025-09-16T04:57:57.400416677Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 16 04:57:57.405218 containerd[1909]: time="2025-09-16T04:57:57.400426703Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 16 04:57:57.405218 containerd[1909]: time="2025-09-16T04:57:57.400582481Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 16 04:57:57.405218 containerd[1909]: time="2025-09-16T04:57:57.401327250Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 16 04:57:57.405218 containerd[1909]: time="2025-09-16T04:57:57.401390917Z" level=info msg="metadata content store policy set" policy=shared Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409045480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409122752Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409228193Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409241348Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409260312Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409271478Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409283900Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409295459Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409310791Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409322503Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409331922Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409344003Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409457095Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 16 04:57:57.409681 containerd[1909]: time="2025-09-16T04:57:57.409475445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409489626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409505146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409518452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409529083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409539338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409548974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 16 
04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409559341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409569151Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409578516Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409637977Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 16 04:57:57.410014 containerd[1909]: time="2025-09-16T04:57:57.409650443Z" level=info msg="Start snapshots syncer" Sep 16 04:57:57.415928 containerd[1909]: time="2025-09-16T04:57:57.410993742Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 16 04:57:57.415928 containerd[1909]: time="2025-09-16T04:57:57.413419657Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.413473693Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415308169Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415480088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415505352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415516439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415528207Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415541552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415552432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415563762Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415589888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415600213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 16 04:57:57.416135 containerd[1909]: time="2025-09-16T04:57:57.415611349Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419802430Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419910372Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419922141Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419932012Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419939852Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419949653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.419960499Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.420022715Z" level=info msg="runtime interface created" Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.420029585Z" level=info msg="created NRI interface" Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.420039806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.420055886Z" level=info msg="Connect containerd service" Sep 16 04:57:57.421225 containerd[1909]: time="2025-09-16T04:57:57.420100769Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 16 04:57:57.421225 
containerd[1909]: time="2025-09-16T04:57:57.420810306Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 16 04:57:57.531408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 16 04:57:57.570615 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 16 04:57:57.576594 dbus-daemon[1844]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 16 04:57:57.579964 dbus-daemon[1844]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1941 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 16 04:57:57.591187 systemd[1]: Starting polkit.service - Authorization Manager... Sep 16 04:57:57.672473 systemd-coredump[1952]: Process 1850 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1850: #0 0x0000556b34b92aeb n/a (ntpd + 0x68aeb) #1 0x0000556b34b3bcdf n/a (ntpd + 0x11cdf) #2 0x0000556b34b3c575 n/a (ntpd + 0x12575) #3 0x0000556b34b37d8a n/a (ntpd + 0xdd8a) #4 0x0000556b34b395d3 n/a (ntpd + 0xf5d3) #5 0x0000556b34b41fd1 n/a (ntpd + 0x17fd1) #6 0x0000556b34b32c2d n/a (ntpd + 0x8c2d) #7 0x00007f441b9aa16c n/a (libc.so.6 + 0x2716c) #8 0x00007f441b9aa229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000556b34b32c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Sep 16 04:57:57.678223 systemd[1]: systemd-coredump@0-1944-0.service: Deactivated successfully. 
Sep 16 04:57:57.687698 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Sep 16 04:57:57.687878 systemd[1]: ntpd.service: Failed with result 'core-dump'. Sep 16 04:57:57.724952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 16 04:57:57.738578 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 16 04:57:57.812271 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Sep 16 04:57:57.819712 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 04:57:57.822243 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 16 04:57:57.853532 polkitd[2037]: Started polkitd version 126 Sep 16 04:57:57.859641 ntpd[2080]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:57:57.860055 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:57:57.860369 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:57:57.860568 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:57:57.860620 ntpd[2080]: ---------------------------------------------------- Sep 16 04:57:57.861275 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: ---------------------------------------------------- Sep 16 04:57:57.861275 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:57:57.861275 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:57:57.861275 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: corporation. 
Support and training for ntp-4 are Sep 16 04:57:57.861275 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: available at https://www.nwtime.org/support Sep 16 04:57:57.861275 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: ---------------------------------------------------- Sep 16 04:57:57.860692 ntpd[2080]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:57:57.860702 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:57:57.860711 ntpd[2080]: corporation. Support and training for ntp-4 are Sep 16 04:57:57.860720 ntpd[2080]: available at https://www.nwtime.org/support Sep 16 04:57:57.860728 ntpd[2080]: ---------------------------------------------------- Sep 16 04:57:57.871261 kernel: ntpd[2080]: segfault at 24 ip 000056075a650aeb sp 00007ffc08a91c10 error 4 in ntpd[68aeb,56075a5ee000+80000] likely on CPU 0 (core 0, socket 0) Sep 16 04:57:57.871382 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Sep 16 04:57:57.862798 ntpd[2080]: proto: precision = 0.084 usec (-23) Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: proto: precision = 0.084 usec (-23) Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: basedate set to 2025-09-04 Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: gps base set to 2025-09-07 (week 2383) Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Listen normally on 3 eth0 172.31.28.73:123 Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: Listen normally on 4 lo [::1]:123 Sep 16 04:57:57.871500 ntpd[2080]: 16 
Sep 04:57:57 ntpd[2080]: bind(21) AF_INET6 [fe80::4dd:d5ff:feaa:208f%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:57:57.871500 ntpd[2080]: 16 Sep 04:57:57 ntpd[2080]: unable to create socket on eth0 (5) for [fe80::4dd:d5ff:feaa:208f%2]:123 Sep 16 04:57:57.863275 ntpd[2080]: basedate set to 2025-09-04 Sep 16 04:57:57.863286 ntpd[2080]: gps base set to 2025-09-07 (week 2383) Sep 16 04:57:57.863462 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:57:57.863493 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:57:57.863661 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:57:57.863687 ntpd[2080]: Listen normally on 3 eth0 172.31.28.73:123 Sep 16 04:57:57.863713 ntpd[2080]: Listen normally on 4 lo [::1]:123 Sep 16 04:57:57.863741 ntpd[2080]: bind(21) AF_INET6 [fe80::4dd:d5ff:feaa:208f%2]:123 flags 0x811 failed: Cannot assign requested address Sep 16 04:57:57.863760 ntpd[2080]: unable to create socket on eth0 (5) for [fe80::4dd:d5ff:feaa:208f%2]:123 Sep 16 04:57:57.877736 systemd-coredump[2089]: Process 2080 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Sep 16 04:57:57.898083 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 16 04:57:57.905766 systemd[1]: Started systemd-coredump@1-2089-0.service - Process Core Dump (PID 2089/UID 0). 
Sep 16 04:57:57.915150 polkitd[2037]: Loading rules from directory /etc/polkit-1/rules.d Sep 16 04:57:57.918559 polkitd[2037]: Loading rules from directory /run/polkit-1/rules.d Sep 16 04:57:57.918623 polkitd[2037]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 04:57:57.919053 polkitd[2037]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 16 04:57:57.919087 polkitd[2037]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 16 04:57:57.919134 polkitd[2037]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 16 04:57:57.921664 polkitd[2037]: Finished loading, compiling and executing 2 rules Sep 16 04:57:57.922138 systemd[1]: Started polkit.service - Authorization Manager. Sep 16 04:57:57.925142 dbus-daemon[1844]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 16 04:57:57.926024 polkitd[2037]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 16 04:57:57.934542 coreos-metadata[1843]: Sep 16 04:57:57.934 INFO Putting http://169.254.169.254/latest/api/token: Attempt #2 Sep 16 04:57:57.939941 coreos-metadata[1843]: Sep 16 04:57:57.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 16 04:57:57.940675 coreos-metadata[1843]: Sep 16 04:57:57.940 INFO Fetch successful Sep 16 04:57:57.941225 coreos-metadata[1843]: Sep 16 04:57:57.940 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 16 04:57:57.944972 coreos-metadata[1843]: Sep 16 04:57:57.944 INFO Fetch successful Sep 16 04:57:57.944972 coreos-metadata[1843]: Sep 16 04:57:57.944 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 16 04:57:57.952059 coreos-metadata[1843]: Sep 16 04:57:57.946 INFO Fetch successful Sep 16 04:57:57.952059 coreos-metadata[1843]: Sep 16 
04:57:57.946 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951677080Z" level=info msg="Start subscribing containerd event" Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951739779Z" level=info msg="Start recovering state" Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951867993Z" level=info msg="Start event monitor" Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951885465Z" level=info msg="Start cni network conf syncer for default" Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951897002Z" level=info msg="Start streaming server" Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951915512Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951924970Z" level=info msg="runtime interface starting up..." Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951932991Z" level=info msg="starting plugins..." Sep 16 04:57:57.952245 containerd[1909]: time="2025-09-16T04:57:57.951947966Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 16 04:57:57.955849 coreos-metadata[1843]: Sep 16 04:57:57.954 INFO Fetch successful Sep 16 04:57:57.955849 coreos-metadata[1843]: Sep 16 04:57:57.954 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 16 04:57:57.960654 coreos-metadata[1843]: Sep 16 04:57:57.956 INFO Fetch failed with 404: resource not found Sep 16 04:57:57.960654 coreos-metadata[1843]: Sep 16 04:57:57.956 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 16 04:57:57.962114 containerd[1909]: time="2025-09-16T04:57:57.961578562Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 16 04:57:57.962114 containerd[1909]: time="2025-09-16T04:57:57.961762804Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 16 04:57:57.962902 coreos-metadata[1843]: Sep 16 04:57:57.962 INFO Fetch successful Sep 16 04:57:57.962902 coreos-metadata[1843]: Sep 16 04:57:57.962 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 16 04:57:57.963440 systemd[1]: Started containerd.service - containerd container runtime. Sep 16 04:57:57.965510 containerd[1909]: time="2025-09-16T04:57:57.963358713Z" level=info msg="containerd successfully booted in 0.587028s" Sep 16 04:57:57.968631 coreos-metadata[1843]: Sep 16 04:57:57.968 INFO Fetch successful Sep 16 04:57:57.968631 coreos-metadata[1843]: Sep 16 04:57:57.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 16 04:57:57.970168 coreos-metadata[1843]: Sep 16 04:57:57.969 INFO Fetch successful Sep 16 04:57:57.970168 coreos-metadata[1843]: Sep 16 04:57:57.970 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 16 04:57:57.972723 coreos-metadata[1843]: Sep 16 04:57:57.972 INFO Fetch successful Sep 16 04:57:57.972723 coreos-metadata[1843]: Sep 16 04:57:57.972 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 16 04:57:57.976221 coreos-metadata[1843]: Sep 16 04:57:57.975 INFO Fetch successful Sep 16 04:57:58.054782 systemd-hostnamed[1941]: Hostname set to (transient) Sep 16 04:57:58.055677 systemd-resolved[1767]: System hostname changed to 'ip-172-31-28-73'. Sep 16 04:57:58.066475 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 16 04:57:58.067767 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 16 04:57:58.168636 sshd_keygen[1880]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 16 04:57:58.208341 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 16 04:57:58.214329 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 16 04:57:58.218583 systemd[1]: Started sshd@0-172.31.28.73:22-139.178.68.195:57718.service - OpenSSH per-connection server daemon (139.178.68.195:57718). Sep 16 04:57:58.228474 systemd-coredump[2090]: Process 2080 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2080: #0 0x000056075a650aeb n/a (ntpd + 0x68aeb) #1 0x000056075a5f9cdf n/a (ntpd + 0x11cdf) #2 0x000056075a5fa575 n/a (ntpd + 0x12575) #3 0x000056075a5f5d8a n/a (ntpd + 0xdd8a) #4 0x000056075a5f75d3 n/a (ntpd + 0xf5d3) #5 0x000056075a5fffd1 n/a (ntpd + 0x17fd1) #6 0x000056075a5f0c2d n/a (ntpd + 0x8c2d) #7 0x00007f4136ce616c n/a (libc.so.6 + 0x2716c) #8 0x00007f4136ce6229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000056075a5f0c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Sep 16 04:57:58.232112 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Sep 16 04:57:58.232419 systemd[1]: ntpd.service: Failed with result 'core-dump'. Sep 16 04:57:58.240682 systemd[1]: systemd-coredump@1-2089-0.service: Deactivated successfully. Sep 16 04:57:58.258420 systemd[1]: issuegen.service: Deactivated successfully. Sep 16 04:57:58.258881 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 16 04:57:58.264560 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 16 04:57:58.311350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 16 04:57:58.316091 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 16 04:57:58.319144 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 16 04:57:58.320658 systemd[1]: Reached target getty.target - Login Prompts. Sep 16 04:57:58.325352 systemd-networkd[1815]: eth0: Gained IPv6LL Sep 16 04:57:58.330033 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 16 04:57:58.333943 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2. Sep 16 04:57:58.334775 systemd[1]: Reached target network-online.target - Network is Online. Sep 16 04:57:58.338019 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 16 04:57:58.341219 tar[1869]: linux-amd64/README.md Sep 16 04:57:58.343674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:57:58.346380 systemd[1]: Started ntpd.service - Network Time Service. Sep 16 04:57:58.350718 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 16 04:57:58.399389 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 16 04:57:58.407233 ntpd[2144]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:57:58.407552 ntpd[2144]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: ntpd 4.2.8p18@1.4062-o Tue Sep 16 02:36:08 UTC 2025 (1): Starting Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: ---------------------------------------------------- Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: corporation. 
Support and training for ntp-4 are Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: available at https://www.nwtime.org/support Sep 16 04:57:58.407664 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: ---------------------------------------------------- Sep 16 04:57:58.407565 ntpd[2144]: ---------------------------------------------------- Sep 16 04:57:58.407574 ntpd[2144]: ntp-4 is maintained by Network Time Foundation, Sep 16 04:57:58.410010 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: proto: precision = 0.064 usec (-24) Sep 16 04:57:58.408931 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 16 04:57:58.407582 ntpd[2144]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 16 04:57:58.410184 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: basedate set to 2025-09-04 Sep 16 04:57:58.410184 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: gps base set to 2025-09-07 (week 2383) Sep 16 04:57:58.407591 ntpd[2144]: corporation. Support and training for ntp-4 are Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listen normally on 3 eth0 172.31.28.73:123 Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listen normally on 4 lo [::1]:123 Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listen normally on 5 eth0 [fe80::4dd:d5ff:feaa:208f%2]:123 Sep 16 04:57:58.411794 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: Listening on routing socket on fd #22 for interface updates Sep 16 04:57:58.407599 ntpd[2144]: available at https://www.nwtime.org/support Sep 16 04:57:58.407607 ntpd[2144]: ---------------------------------------------------- Sep 16 04:57:58.409858 ntpd[2144]: proto: precision = 0.064 usec (-24) Sep 16 04:57:58.410111 
ntpd[2144]: basedate set to 2025-09-04 Sep 16 04:57:58.410124 ntpd[2144]: gps base set to 2025-09-07 (week 2383) Sep 16 04:57:58.411264 ntpd[2144]: Listen and drop on 0 v6wildcard [::]:123 Sep 16 04:57:58.411305 ntpd[2144]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 16 04:57:58.411491 ntpd[2144]: Listen normally on 2 lo 127.0.0.1:123 Sep 16 04:57:58.411517 ntpd[2144]: Listen normally on 3 eth0 172.31.28.73:123 Sep 16 04:57:58.411545 ntpd[2144]: Listen normally on 4 lo [::1]:123 Sep 16 04:57:58.411571 ntpd[2144]: Listen normally on 5 eth0 [fe80::4dd:d5ff:feaa:208f%2]:123 Sep 16 04:57:58.411603 ntpd[2144]: Listening on routing socket on fd #22 for interface updates Sep 16 04:57:58.417103 ntpd[2144]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:57:58.417543 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:57:58.417543 ntpd[2144]: 16 Sep 04:57:58 ntpd[2144]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:57:58.417137 ntpd[2144]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 16 04:57:58.471671 sshd[2122]: Accepted publickey for core from 139.178.68.195 port 57718 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:57:58.474227 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:57:58.482678 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 16 04:57:58.485498 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 16 04:57:58.498789 systemd-logind[1858]: New session 1 of user core. Sep 16 04:57:58.518115 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 16 04:57:58.526942 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 16 04:57:58.529722 amazon-ssm-agent[2142]: Initializing new seelog logger Sep 16 04:57:58.529722 amazon-ssm-agent[2142]: New Seelog Logger Creation Complete Sep 16 04:57:58.529722 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.529722 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.529722 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 processing appconfig overrides Sep 16 04:57:58.532217 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.532217 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.532217 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 processing appconfig overrides Sep 16 04:57:58.532388 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.532388 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.532461 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 processing appconfig overrides Sep 16 04:57:58.532919 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.5305 INFO Proxy environment variables: Sep 16 04:57:58.538537 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.538537 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 16 04:57:58.538537 amazon-ssm-agent[2142]: 2025/09/16 04:57:58 processing appconfig overrides Sep 16 04:57:58.548442 (systemd)[2166]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 16 04:57:58.557397 systemd-logind[1858]: New session c1 of user core. 
Sep 16 04:57:58.635358 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.5306 INFO https_proxy:
Sep 16 04:57:58.737115 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.5306 INFO http_proxy:
Sep 16 04:57:58.834338 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.5306 INFO no_proxy:
Sep 16 04:57:58.843594 systemd[2166]: Queued start job for default target default.target.
Sep 16 04:57:58.850595 systemd[2166]: Created slice app.slice - User Application Slice.
Sep 16 04:57:58.850641 systemd[2166]: Reached target paths.target - Paths.
Sep 16 04:57:58.850694 systemd[2166]: Reached target timers.target - Timers.
Sep 16 04:57:58.853355 systemd[2166]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 16 04:57:58.882981 systemd[2166]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 16 04:57:58.883150 systemd[2166]: Reached target sockets.target - Sockets.
Sep 16 04:57:58.883242 systemd[2166]: Reached target basic.target - Basic System.
Sep 16 04:57:58.883295 systemd[2166]: Reached target default.target - Main User Target.
Sep 16 04:57:58.883334 systemd[2166]: Startup finished in 307ms.
Sep 16 04:57:58.883366 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 16 04:57:58.888448 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 16 04:57:58.932613 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.5319 INFO Checking if agent identity type OnPrem can be assumed
Sep 16 04:57:59.010409 amazon-ssm-agent[2142]: 2025/09/16 04:57:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:57:59.010409 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 16 04:57:59.010409 amazon-ssm-agent[2142]: 2025/09/16 04:57:59 processing appconfig overrides
Sep 16 04:57:59.036972 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.5322 INFO Checking if agent identity type EC2 can be assumed
Sep 16 04:57:59.083393 systemd[1]: Started sshd@1-172.31.28.73:22-139.178.68.195:36490.service - OpenSSH per-connection server daemon (139.178.68.195:36490).
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6059 INFO Agent will take identity from EC2
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6104 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6104 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6104 INFO [amazon-ssm-agent] Starting Core Agent
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6104 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6105 INFO [Registrar] Starting registrar module
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6137 INFO [EC2Identity] Checking disk for registration info
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6137 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.6138 INFO [EC2Identity] Generating registration keypair
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.9478 INFO [EC2Identity] Checking write access before registering
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:58.9484 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.0092 INFO [EC2Identity] EC2 registration was successful.
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.0093 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.0099 INFO [CredentialRefresher] credentialRefresher has started
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.0099 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.1031 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 16 04:57:59.103979 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.1034 INFO [CredentialRefresher] Credentials ready
Sep 16 04:57:59.137288 amazon-ssm-agent[2142]: 2025-09-16 04:57:59.1038 INFO [CredentialRefresher] Next credential rotation will be in 29.9999892131 minutes
Sep 16 04:57:59.360551 sshd[2181]: Accepted publickey for core from 139.178.68.195 port 36490 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:57:59.363297 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:57:59.372141 systemd-logind[1858]: New session 2 of user core.
Sep 16 04:57:59.376526 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 16 04:57:59.504346 sshd[2184]: Connection closed by 139.178.68.195 port 36490
Sep 16 04:57:59.505448 sshd-session[2181]: pam_unix(sshd:session): session closed for user core
Sep 16 04:57:59.513864 systemd[1]: sshd@1-172.31.28.73:22-139.178.68.195:36490.service: Deactivated successfully.
Sep 16 04:57:59.516809 systemd[1]: session-2.scope: Deactivated successfully.
Sep 16 04:57:59.518286 systemd-logind[1858]: Session 2 logged out. Waiting for processes to exit.
Sep 16 04:57:59.520816 systemd-logind[1858]: Removed session 2.
Sep 16 04:57:59.537313 systemd[1]: Started sshd@2-172.31.28.73:22-139.178.68.195:36504.service - OpenSSH per-connection server daemon (139.178.68.195:36504).
Sep 16 04:57:59.710084 sshd[2190]: Accepted publickey for core from 139.178.68.195 port 36504 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:57:59.711278 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:57:59.719064 systemd-logind[1858]: New session 3 of user core.
Sep 16 04:57:59.724533 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 16 04:57:59.840228 sshd[2193]: Connection closed by 139.178.68.195 port 36504
Sep 16 04:57:59.840817 sshd-session[2190]: pam_unix(sshd:session): session closed for user core
Sep 16 04:57:59.845789 systemd-logind[1858]: Session 3 logged out. Waiting for processes to exit.
Sep 16 04:57:59.846054 systemd[1]: sshd@2-172.31.28.73:22-139.178.68.195:36504.service: Deactivated successfully.
Sep 16 04:57:59.848156 systemd[1]: session-3.scope: Deactivated successfully.
Sep 16 04:57:59.850162 systemd-logind[1858]: Removed session 3.
Sep 16 04:58:00.120973 amazon-ssm-agent[2142]: 2025-09-16 04:58:00.1208 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 16 04:58:00.221280 amazon-ssm-agent[2142]: 2025-09-16 04:58:00.1238 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2200) started
Sep 16 04:58:00.285145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:58:00.287071 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 16 04:58:00.288606 systemd[1]: Startup finished in 2.785s (kernel) + 10.710s (initrd) + 6.959s (userspace) = 20.455s.
Sep 16 04:58:00.301308 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:58:00.322072 amazon-ssm-agent[2142]: 2025-09-16 04:58:00.1239 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 16 04:58:01.594255 kubelet[2212]: E0916 04:58:01.594144 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:58:01.597066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:58:01.597283 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:58:01.597980 systemd[1]: kubelet.service: Consumed 1.140s CPU time, 265.3M memory peak.
Sep 16 04:58:06.475063 systemd-resolved[1767]: Clock change detected. Flushing caches.
Sep 16 04:58:10.941588 systemd[1]: Started sshd@3-172.31.28.73:22-139.178.68.195:38256.service - OpenSSH per-connection server daemon (139.178.68.195:38256).
Sep 16 04:58:11.122928 sshd[2228]: Accepted publickey for core from 139.178.68.195 port 38256 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:58:11.124584 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:58:11.129807 systemd-logind[1858]: New session 4 of user core.
Sep 16 04:58:11.135476 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 16 04:58:11.256632 sshd[2231]: Connection closed by 139.178.68.195 port 38256
Sep 16 04:58:11.257684 sshd-session[2228]: pam_unix(sshd:session): session closed for user core
Sep 16 04:58:11.261753 systemd[1]: sshd@3-172.31.28.73:22-139.178.68.195:38256.service: Deactivated successfully.
Sep 16 04:58:11.263850 systemd[1]: session-4.scope: Deactivated successfully.
Sep 16 04:58:11.265493 systemd-logind[1858]: Session 4 logged out. Waiting for processes to exit.
Sep 16 04:58:11.267536 systemd-logind[1858]: Removed session 4.
Sep 16 04:58:11.289987 systemd[1]: Started sshd@4-172.31.28.73:22-139.178.68.195:38262.service - OpenSSH per-connection server daemon (139.178.68.195:38262).
Sep 16 04:58:11.464912 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 38262 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:58:11.466234 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:58:11.472822 systemd-logind[1858]: New session 5 of user core.
Sep 16 04:58:11.478361 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 16 04:58:11.596004 sshd[2240]: Connection closed by 139.178.68.195 port 38262
Sep 16 04:58:11.597033 sshd-session[2237]: pam_unix(sshd:session): session closed for user core
Sep 16 04:58:11.601906 systemd[1]: sshd@4-172.31.28.73:22-139.178.68.195:38262.service: Deactivated successfully.
Sep 16 04:58:11.603991 systemd[1]: session-5.scope: Deactivated successfully.
Sep 16 04:58:11.605148 systemd-logind[1858]: Session 5 logged out. Waiting for processes to exit.
Sep 16 04:58:11.606819 systemd-logind[1858]: Removed session 5.
Sep 16 04:58:11.628023 systemd[1]: Started sshd@5-172.31.28.73:22-139.178.68.195:38268.service - OpenSSH per-connection server daemon (139.178.68.195:38268).
Sep 16 04:58:11.803407 sshd[2246]: Accepted publickey for core from 139.178.68.195 port 38268 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:58:11.804747 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:58:11.809293 systemd-logind[1858]: New session 6 of user core.
Sep 16 04:58:11.820331 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 16 04:58:11.936251 sshd[2249]: Connection closed by 139.178.68.195 port 38268
Sep 16 04:58:11.937095 sshd-session[2246]: pam_unix(sshd:session): session closed for user core
Sep 16 04:58:11.941427 systemd[1]: sshd@5-172.31.28.73:22-139.178.68.195:38268.service: Deactivated successfully.
Sep 16 04:58:11.943386 systemd[1]: session-6.scope: Deactivated successfully.
Sep 16 04:58:11.944433 systemd-logind[1858]: Session 6 logged out. Waiting for processes to exit.
Sep 16 04:58:11.946248 systemd-logind[1858]: Removed session 6.
Sep 16 04:58:11.973968 systemd[1]: Started sshd@6-172.31.28.73:22-139.178.68.195:38274.service - OpenSSH per-connection server daemon (139.178.68.195:38274).
Sep 16 04:58:12.140234 sshd[2255]: Accepted publickey for core from 139.178.68.195 port 38274 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:58:12.141582 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:58:12.148695 systemd-logind[1858]: New session 7 of user core.
Sep 16 04:58:12.152324 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 16 04:58:12.264014 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 16 04:58:12.264307 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:58:12.277710 sudo[2259]: pam_unix(sudo:session): session closed for user root
Sep 16 04:58:12.300225 sshd[2258]: Connection closed by 139.178.68.195 port 38274
Sep 16 04:58:12.300992 sshd-session[2255]: pam_unix(sshd:session): session closed for user core
Sep 16 04:58:12.305516 systemd[1]: sshd@6-172.31.28.73:22-139.178.68.195:38274.service: Deactivated successfully.
Sep 16 04:58:12.307652 systemd[1]: session-7.scope: Deactivated successfully.
Sep 16 04:58:12.308752 systemd-logind[1858]: Session 7 logged out. Waiting for processes to exit.
Sep 16 04:58:12.310646 systemd-logind[1858]: Removed session 7.
Sep 16 04:58:12.336843 systemd[1]: Started sshd@7-172.31.28.73:22-139.178.68.195:38288.service - OpenSSH per-connection server daemon (139.178.68.195:38288).
Sep 16 04:58:12.503832 sshd[2265]: Accepted publickey for core from 139.178.68.195 port 38288 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:58:12.505217 sshd-session[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:58:12.512020 systemd-logind[1858]: New session 8 of user core.
Sep 16 04:58:12.521375 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 16 04:58:12.618314 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 16 04:58:12.618588 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:58:12.624380 sudo[2270]: pam_unix(sudo:session): session closed for user root
Sep 16 04:58:12.630298 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 16 04:58:12.630823 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:58:12.645595 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 16 04:58:12.689012 augenrules[2292]: No rules
Sep 16 04:58:12.690434 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 16 04:58:12.690721 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 16 04:58:12.692067 sudo[2269]: pam_unix(sudo:session): session closed for user root
Sep 16 04:58:12.693474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 16 04:58:12.696848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:58:12.716149 sshd[2268]: Connection closed by 139.178.68.195 port 38288
Sep 16 04:58:12.717252 sshd-session[2265]: pam_unix(sshd:session): session closed for user core
Sep 16 04:58:12.722636 systemd[1]: sshd@7-172.31.28.73:22-139.178.68.195:38288.service: Deactivated successfully.
Sep 16 04:58:12.727304 systemd[1]: session-8.scope: Deactivated successfully.
Sep 16 04:58:12.729186 systemd-logind[1858]: Session 8 logged out. Waiting for processes to exit.
Sep 16 04:58:12.732282 systemd-logind[1858]: Removed session 8.
Sep 16 04:58:12.752204 systemd[1]: Started sshd@8-172.31.28.73:22-139.178.68.195:38304.service - OpenSSH per-connection server daemon (139.178.68.195:38304).
Sep 16 04:58:12.918756 sshd[2304]: Accepted publickey for core from 139.178.68.195 port 38304 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 04:58:12.920771 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 04:58:12.927591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:58:12.933210 systemd-logind[1858]: New session 9 of user core.
Sep 16 04:58:12.939489 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 16 04:58:12.940196 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:58:12.996512 kubelet[2312]: E0916 04:58:12.996457 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:58:13.006680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:58:13.006863 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:58:13.007641 systemd[1]: kubelet.service: Consumed 186ms CPU time, 108.5M memory peak.
Sep 16 04:58:13.038256 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 16 04:58:13.038518 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 16 04:58:13.446234 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 16 04:58:13.467636 (dockerd)[2339]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 16 04:58:13.766333 dockerd[2339]: time="2025-09-16T04:58:13.766185441Z" level=info msg="Starting up"
Sep 16 04:58:13.767343 dockerd[2339]: time="2025-09-16T04:58:13.767305073Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 16 04:58:13.778934 dockerd[2339]: time="2025-09-16T04:58:13.778881389Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 16 04:58:13.838567 dockerd[2339]: time="2025-09-16T04:58:13.838279893Z" level=info msg="Loading containers: start."
Sep 16 04:58:13.851489 kernel: Initializing XFRM netlink socket
Sep 16 04:58:14.107959 (udev-worker)[2361]: Network interface NamePolicy= disabled on kernel command line.
Sep 16 04:58:14.161926 systemd-networkd[1815]: docker0: Link UP
Sep 16 04:58:14.167067 dockerd[2339]: time="2025-09-16T04:58:14.167013600Z" level=info msg="Loading containers: done."
Sep 16 04:58:14.181730 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck465195378-merged.mount: Deactivated successfully.
Sep 16 04:58:14.186863 dockerd[2339]: time="2025-09-16T04:58:14.186806101Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 16 04:58:14.187028 dockerd[2339]: time="2025-09-16T04:58:14.186897304Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 16 04:58:14.187028 dockerd[2339]: time="2025-09-16T04:58:14.186990690Z" level=info msg="Initializing buildkit"
Sep 16 04:58:14.214963 dockerd[2339]: time="2025-09-16T04:58:14.214919366Z" level=info msg="Completed buildkit initialization"
Sep 16 04:58:14.224454 dockerd[2339]: time="2025-09-16T04:58:14.224369475Z" level=info msg="Daemon has completed initialization"
Sep 16 04:58:14.224454 dockerd[2339]: time="2025-09-16T04:58:14.224444993Z" level=info msg="API listen on /run/docker.sock"
Sep 16 04:58:14.224827 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 16 04:58:15.240149 containerd[1909]: time="2025-09-16T04:58:15.240107282Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 16 04:58:15.770973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468434992.mount: Deactivated successfully.
Sep 16 04:58:17.288102 containerd[1909]: time="2025-09-16T04:58:17.288021551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:17.289117 containerd[1909]: time="2025-09-16T04:58:17.289033463Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 16 04:58:17.290216 containerd[1909]: time="2025-09-16T04:58:17.290160437Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:17.292905 containerd[1909]: time="2025-09-16T04:58:17.292850122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:17.293701 containerd[1909]: time="2025-09-16T04:58:17.293521992Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.053377223s"
Sep 16 04:58:17.293701 containerd[1909]: time="2025-09-16T04:58:17.293557513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 16 04:58:17.294195 containerd[1909]: time="2025-09-16T04:58:17.294172416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 16 04:58:19.138347 containerd[1909]: time="2025-09-16T04:58:19.138285669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:19.139685 containerd[1909]: time="2025-09-16T04:58:19.139630738Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 16 04:58:19.140910 containerd[1909]: time="2025-09-16T04:58:19.140462140Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:19.143372 containerd[1909]: time="2025-09-16T04:58:19.143336093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:19.144396 containerd[1909]: time="2025-09-16T04:58:19.144358437Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.850153636s"
Sep 16 04:58:19.144534 containerd[1909]: time="2025-09-16T04:58:19.144513074Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 16 04:58:19.145035 containerd[1909]: time="2025-09-16T04:58:19.144987313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 16 04:58:20.549710 containerd[1909]: time="2025-09-16T04:58:20.549627299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:20.550784 containerd[1909]: time="2025-09-16T04:58:20.550739950Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 16 04:58:20.553103 containerd[1909]: time="2025-09-16T04:58:20.552202927Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:20.554557 containerd[1909]: time="2025-09-16T04:58:20.554519244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:20.555971 containerd[1909]: time="2025-09-16T04:58:20.555934216Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.410892709s"
Sep 16 04:58:20.556149 containerd[1909]: time="2025-09-16T04:58:20.556127326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 16 04:58:20.556889 containerd[1909]: time="2025-09-16T04:58:20.556844307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 16 04:58:21.738730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043698931.mount: Deactivated successfully.
Sep 16 04:58:22.319927 containerd[1909]: time="2025-09-16T04:58:22.319869368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:22.321021 containerd[1909]: time="2025-09-16T04:58:22.320906316Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 16 04:58:22.322819 containerd[1909]: time="2025-09-16T04:58:22.321949811Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:22.323790 containerd[1909]: time="2025-09-16T04:58:22.323755310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:22.324355 containerd[1909]: time="2025-09-16T04:58:22.324326959Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.767320558s"
Sep 16 04:58:22.324465 containerd[1909]: time="2025-09-16T04:58:22.324449879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 16 04:58:22.325006 containerd[1909]: time="2025-09-16T04:58:22.324952117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 16 04:58:22.791801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156298158.mount: Deactivated successfully.
Sep 16 04:58:23.257422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 16 04:58:23.259458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:58:23.523976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:58:23.537697 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 16 04:58:23.618721 kubelet[2684]: E0916 04:58:23.618672 2684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 16 04:58:23.623000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 16 04:58:23.623413 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 16 04:58:23.624155 systemd[1]: kubelet.service: Consumed 215ms CPU time, 108.1M memory peak.
Sep 16 04:58:24.039307 containerd[1909]: time="2025-09-16T04:58:24.039248312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:24.041291 containerd[1909]: time="2025-09-16T04:58:24.041245630Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 16 04:58:24.043781 containerd[1909]: time="2025-09-16T04:58:24.043719961Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:24.047877 containerd[1909]: time="2025-09-16T04:58:24.047811016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:24.048727 containerd[1909]: time="2025-09-16T04:58:24.048552808Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.723569425s"
Sep 16 04:58:24.048727 containerd[1909]: time="2025-09-16T04:58:24.048587800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 16 04:58:24.049874 containerd[1909]: time="2025-09-16T04:58:24.049842177Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 16 04:58:24.517540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521352803.mount: Deactivated successfully.
Sep 16 04:58:24.531355 containerd[1909]: time="2025-09-16T04:58:24.531293358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 16 04:58:24.535250 containerd[1909]: time="2025-09-16T04:58:24.535206727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 16 04:58:24.537695 containerd[1909]: time="2025-09-16T04:58:24.537622824Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 16 04:58:24.543143 containerd[1909]: time="2025-09-16T04:58:24.542309470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 16 04:58:24.543143 containerd[1909]: time="2025-09-16T04:58:24.542805488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.929337ms"
Sep 16 04:58:24.543143 containerd[1909]: time="2025-09-16T04:58:24.542836712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 16 04:58:24.543475 containerd[1909]: time="2025-09-16T04:58:24.543443152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 16 04:58:25.114402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607834125.mount: Deactivated successfully.
Sep 16 04:58:27.871711 containerd[1909]: time="2025-09-16T04:58:27.871648539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:27.872858 containerd[1909]: time="2025-09-16T04:58:27.872658286Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 16 04:58:27.873738 containerd[1909]: time="2025-09-16T04:58:27.873708989Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:27.876843 containerd[1909]: time="2025-09-16T04:58:27.876808870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:27.877998 containerd[1909]: time="2025-09-16T04:58:27.877963266Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.334485391s"
Sep 16 04:58:27.878258 containerd[1909]: time="2025-09-16T04:58:27.878130103Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 16 04:58:29.142546 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 16 04:58:30.611220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 16 04:58:30.611537 systemd[1]: kubelet.service: Consumed 215ms CPU time, 108.1M memory peak.
Sep 16 04:58:30.614506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 16 04:58:30.660643 systemd[1]: Reload requested from client PID 2781 ('systemctl') (unit session-9.scope)... Sep 16 04:58:30.660667 systemd[1]: Reloading... Sep 16 04:58:30.818120 zram_generator::config[2828]: No configuration found. Sep 16 04:58:31.091252 systemd[1]: Reloading finished in 429 ms. Sep 16 04:58:31.155985 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 16 04:58:31.156124 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 16 04:58:31.156488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:31.156549 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98.1M memory peak. Sep 16 04:58:31.158692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:31.391602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:31.402480 (kubelet)[2888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:31.462103 kubelet[2888]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:31.462563 kubelet[2888]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:31.462563 kubelet[2888]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 16 04:58:31.464519 kubelet[2888]: I0916 04:58:31.464346 2888 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:31.736146 kubelet[2888]: I0916 04:58:31.735983 2888 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:58:31.736146 kubelet[2888]: I0916 04:58:31.736034 2888 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:31.736599 kubelet[2888]: I0916 04:58:31.736366 2888 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:58:31.798111 kubelet[2888]: E0916 04:58:31.797530 2888 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:31.800725 kubelet[2888]: I0916 04:58:31.800688 2888 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:31.814287 kubelet[2888]: I0916 04:58:31.814246 2888 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:31.820820 kubelet[2888]: I0916 04:58:31.820777 2888 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 16 04:58:31.826116 kubelet[2888]: I0916 04:58:31.823031 2888 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:31.826116 kubelet[2888]: I0916 04:58:31.823077 2888 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:58:31.828918 kubelet[2888]: I0916 04:58:31.828838 2888 topology_manager.go:138] "Creating topology manager with none 
policy" Sep 16 04:58:31.828918 kubelet[2888]: I0916 04:58:31.828882 2888 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:58:31.830426 kubelet[2888]: I0916 04:58:31.830371 2888 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:31.838405 kubelet[2888]: I0916 04:58:31.838124 2888 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:58:31.838405 kubelet[2888]: I0916 04:58:31.838177 2888 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:31.838405 kubelet[2888]: I0916 04:58:31.838207 2888 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:58:31.838405 kubelet[2888]: I0916 04:58:31.838219 2888 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:31.846621 kubelet[2888]: W0916 04:58:31.844983 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-73&limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:31.846621 kubelet[2888]: E0916 04:58:31.846398 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-73&limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:31.846621 kubelet[2888]: I0916 04:58:31.846491 2888 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:31.850917 kubelet[2888]: I0916 04:58:31.850746 2888 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:58:31.850917 kubelet[2888]: W0916 04:58:31.850910 2888 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 16 04:58:31.856587 kubelet[2888]: I0916 04:58:31.856543 2888 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:58:31.856587 kubelet[2888]: I0916 04:58:31.856588 2888 server.go:1287] "Started kubelet" Sep 16 04:58:31.858540 kubelet[2888]: W0916 04:58:31.858492 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:31.858623 kubelet[2888]: E0916 04:58:31.858546 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:31.858682 kubelet[2888]: I0916 04:58:31.858628 2888 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:58:31.864726 kubelet[2888]: I0916 04:58:31.863647 2888 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:31.864726 kubelet[2888]: I0916 04:58:31.864323 2888 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:31.864726 kubelet[2888]: I0916 04:58:31.864651 2888 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:31.872162 kubelet[2888]: E0916 04:58:31.866723 2888 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.73:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.73:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-73.1865aa839fce1411 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-73,UID:ip-172-31-28-73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-73,},FirstTimestamp:2025-09-16 04:58:31.856567313 +0000 UTC m=+0.449651853,LastTimestamp:2025-09-16 04:58:31.856567313 +0000 UTC m=+0.449651853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-73,}" Sep 16 04:58:31.872397 kubelet[2888]: I0916 04:58:31.872369 2888 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:58:31.878721 kubelet[2888]: I0916 04:58:31.877295 2888 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:58:31.878721 kubelet[2888]: E0916 04:58:31.877595 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-73\" not found" Sep 16 04:58:31.881236 kubelet[2888]: I0916 04:58:31.881211 2888 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:31.881386 kubelet[2888]: I0916 04:58:31.881365 2888 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:58:31.881445 kubelet[2888]: I0916 04:58:31.881430 2888 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:31.886073 kubelet[2888]: E0916 04:58:31.886006 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-73?timeout=10s\": dial tcp 172.31.28.73:6443: connect: connection refused" interval="200ms" Sep 16 04:58:31.887461 kubelet[2888]: W0916 04:58:31.887241 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.28.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:31.890603 kubelet[2888]: E0916 04:58:31.889193 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:31.896638 kubelet[2888]: I0916 04:58:31.896610 2888 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:58:31.897419 kubelet[2888]: I0916 04:58:31.897389 2888 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:58:31.909024 kubelet[2888]: I0916 04:58:31.908963 2888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:31.910893 kubelet[2888]: I0916 04:58:31.910775 2888 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:58:31.910893 kubelet[2888]: I0916 04:58:31.910882 2888 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:58:31.911051 kubelet[2888]: I0916 04:58:31.910912 2888 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:58:31.911051 kubelet[2888]: I0916 04:58:31.910923 2888 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:58:31.911051 kubelet[2888]: E0916 04:58:31.910976 2888 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:58:31.912742 kubelet[2888]: I0916 04:58:31.911694 2888 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:58:31.922487 kubelet[2888]: W0916 04:58:31.922434 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:31.922661 kubelet[2888]: E0916 04:58:31.922639 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:31.923037 kubelet[2888]: E0916 04:58:31.923003 2888 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:31.942564 kubelet[2888]: I0916 04:58:31.942537 2888 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:58:31.942564 kubelet[2888]: I0916 04:58:31.942554 2888 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:31.942564 kubelet[2888]: I0916 04:58:31.942571 2888 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:31.946268 kubelet[2888]: I0916 04:58:31.946231 2888 policy_none.go:49] "None policy: Start" Sep 16 04:58:31.946268 kubelet[2888]: I0916 04:58:31.946260 2888 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:58:31.946268 kubelet[2888]: I0916 04:58:31.946271 2888 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:31.966417 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 16 04:58:31.977324 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 16 04:58:31.977940 kubelet[2888]: E0916 04:58:31.977763 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-73\" not found" Sep 16 04:58:31.981366 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 16 04:58:31.989616 kubelet[2888]: I0916 04:58:31.989020 2888 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:58:31.989616 kubelet[2888]: I0916 04:58:31.989230 2888 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:31.989616 kubelet[2888]: I0916 04:58:31.989241 2888 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:31.991985 kubelet[2888]: I0916 04:58:31.991205 2888 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:31.993120 kubelet[2888]: E0916 04:58:31.993062 2888 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:58:31.993291 kubelet[2888]: E0916 04:58:31.993274 2888 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-73\" not found" Sep 16 04:58:32.026050 systemd[1]: Created slice kubepods-burstable-pod54297050fc671ece3fea6c4a273d1aa5.slice - libcontainer container kubepods-burstable-pod54297050fc671ece3fea6c4a273d1aa5.slice. Sep 16 04:58:32.035434 kubelet[2888]: E0916 04:58:32.035339 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:32.038689 systemd[1]: Created slice kubepods-burstable-podd5b7ca9021073c131330808a61d7c887.slice - libcontainer container kubepods-burstable-podd5b7ca9021073c131330808a61d7c887.slice. 
Sep 16 04:58:32.043449 kubelet[2888]: E0916 04:58:32.043419 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:32.044960 systemd[1]: Created slice kubepods-burstable-pod6cb982fbc4c71d688fa9a41f8d442ea4.slice - libcontainer container kubepods-burstable-pod6cb982fbc4c71d688fa9a41f8d442ea4.slice. Sep 16 04:58:32.047748 kubelet[2888]: E0916 04:58:32.047718 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:32.086756 kubelet[2888]: E0916 04:58:32.086706 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-73?timeout=10s\": dial tcp 172.31.28.73:6443: connect: connection refused" interval="400ms" Sep 16 04:58:32.091768 kubelet[2888]: I0916 04:58:32.091579 2888 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-73" Sep 16 04:58:32.092600 kubelet[2888]: E0916 04:58:32.092566 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.73:6443/api/v1/nodes\": dial tcp 172.31.28.73:6443: connect: connection refused" node="ip-172-31-28-73" Sep 16 04:58:32.182527 kubelet[2888]: I0916 04:58:32.182405 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cb982fbc4c71d688fa9a41f8d442ea4-ca-certs\") pod \"kube-apiserver-ip-172-31-28-73\" (UID: \"6cb982fbc4c71d688fa9a41f8d442ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:32.182527 kubelet[2888]: I0916 04:58:32.182453 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6cb982fbc4c71d688fa9a41f8d442ea4-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-73\" (UID: \"6cb982fbc4c71d688fa9a41f8d442ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:32.182527 kubelet[2888]: I0916 04:58:32.182470 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:32.182527 kubelet[2888]: I0916 04:58:32.182488 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:32.182527 kubelet[2888]: I0916 04:58:32.182503 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:32.182983 kubelet[2888]: I0916 04:58:32.182521 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5b7ca9021073c131330808a61d7c887-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-73\" (UID: \"d5b7ca9021073c131330808a61d7c887\") " pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:32.182983 kubelet[2888]: I0916 04:58:32.182538 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cb982fbc4c71d688fa9a41f8d442ea4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-73\" (UID: \"6cb982fbc4c71d688fa9a41f8d442ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:32.182983 kubelet[2888]: I0916 04:58:32.182555 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:32.182983 kubelet[2888]: I0916 04:58:32.182573 2888 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:32.295444 kubelet[2888]: I0916 04:58:32.295313 2888 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-73" Sep 16 04:58:32.295766 kubelet[2888]: E0916 04:58:32.295722 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.73:6443/api/v1/nodes\": dial tcp 172.31.28.73:6443: connect: connection refused" node="ip-172-31-28-73" Sep 16 04:58:32.336661 containerd[1909]: time="2025-09-16T04:58:32.336617989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-73,Uid:54297050fc671ece3fea6c4a273d1aa5,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:32.344647 containerd[1909]: time="2025-09-16T04:58:32.344417159Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-73,Uid:d5b7ca9021073c131330808a61d7c887,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:32.349266 containerd[1909]: time="2025-09-16T04:58:32.349203250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-73,Uid:6cb982fbc4c71d688fa9a41f8d442ea4,Namespace:kube-system,Attempt:0,}" Sep 16 04:58:32.488057 kubelet[2888]: E0916 04:58:32.487981 2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-73?timeout=10s\": dial tcp 172.31.28.73:6443: connect: connection refused" interval="800ms" Sep 16 04:58:32.497029 containerd[1909]: time="2025-09-16T04:58:32.496957256Z" level=info msg="connecting to shim 6360fbed32c68e1d26ab304356e2d1d05e60d7c4ea3d2e4e183976f984dea2bf" address="unix:///run/containerd/s/b57ce2309e9eddaf5e1c3e735a9e288df09fd00271aa76648e841f25e95e9a2f" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:32.510275 containerd[1909]: time="2025-09-16T04:58:32.510219330Z" level=info msg="connecting to shim 69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab" address="unix:///run/containerd/s/b538fc7763431c49261db2ad4c054ba1e15f93f5877f8d97919845ebad9555a4" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:32.511504 containerd[1909]: time="2025-09-16T04:58:32.511457584Z" level=info msg="connecting to shim 169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d" address="unix:///run/containerd/s/607254b4c029e2c12daa053691f8adb0f662b78fb9ef06b44640edc0b4375241" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:58:32.613381 systemd[1]: Started cri-containerd-169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d.scope - libcontainer container 169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d. 
Sep 16 04:58:32.623554 systemd[1]: Started cri-containerd-6360fbed32c68e1d26ab304356e2d1d05e60d7c4ea3d2e4e183976f984dea2bf.scope - libcontainer container 6360fbed32c68e1d26ab304356e2d1d05e60d7c4ea3d2e4e183976f984dea2bf. Sep 16 04:58:32.626255 systemd[1]: Started cri-containerd-69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab.scope - libcontainer container 69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab. Sep 16 04:58:32.699109 kubelet[2888]: I0916 04:58:32.699065 2888 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-73" Sep 16 04:58:32.699767 kubelet[2888]: E0916 04:58:32.699728 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.73:6443/api/v1/nodes\": dial tcp 172.31.28.73:6443: connect: connection refused" node="ip-172-31-28-73" Sep 16 04:58:32.702572 kubelet[2888]: W0916 04:58:32.702545 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:32.702739 kubelet[2888]: E0916 04:58:32.702720 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:32.744766 containerd[1909]: time="2025-09-16T04:58:32.744713447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-73,Uid:54297050fc671ece3fea6c4a273d1aa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d\"" Sep 16 04:58:32.753558 containerd[1909]: time="2025-09-16T04:58:32.753519529Z" level=info msg="CreateContainer 
within sandbox \"169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 16 04:58:32.758882 containerd[1909]: time="2025-09-16T04:58:32.758786531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-73,Uid:6cb982fbc4c71d688fa9a41f8d442ea4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6360fbed32c68e1d26ab304356e2d1d05e60d7c4ea3d2e4e183976f984dea2bf\"" Sep 16 04:58:32.764608 containerd[1909]: time="2025-09-16T04:58:32.763714412Z" level=info msg="CreateContainer within sandbox \"6360fbed32c68e1d26ab304356e2d1d05e60d7c4ea3d2e4e183976f984dea2bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 16 04:58:32.772070 containerd[1909]: time="2025-09-16T04:58:32.772033704Z" level=info msg="Container 9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:32.772990 containerd[1909]: time="2025-09-16T04:58:32.772585029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-73,Uid:d5b7ca9021073c131330808a61d7c887,Namespace:kube-system,Attempt:0,} returns sandbox id \"69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab\"" Sep 16 04:58:32.779424 containerd[1909]: time="2025-09-16T04:58:32.779392754Z" level=info msg="CreateContainer within sandbox \"69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 16 04:58:32.784489 containerd[1909]: time="2025-09-16T04:58:32.784452973Z" level=info msg="Container 554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:58:32.791730 containerd[1909]: time="2025-09-16T04:58:32.791665909Z" level=info msg="Container 25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4: CDI devices from CRI Config.CDIDevices: []" Sep 16 
04:58:32.797110 containerd[1909]: time="2025-09-16T04:58:32.796962719Z" level=info msg="CreateContainer within sandbox \"169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\"" Sep 16 04:58:32.799303 containerd[1909]: time="2025-09-16T04:58:32.798158739Z" level=info msg="StartContainer for \"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\"" Sep 16 04:58:32.799303 containerd[1909]: time="2025-09-16T04:58:32.798537608Z" level=info msg="CreateContainer within sandbox \"6360fbed32c68e1d26ab304356e2d1d05e60d7c4ea3d2e4e183976f984dea2bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7\"" Sep 16 04:58:32.799426 containerd[1909]: time="2025-09-16T04:58:32.799390238Z" level=info msg="StartContainer for \"554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7\"" Sep 16 04:58:32.799763 containerd[1909]: time="2025-09-16T04:58:32.799744113Z" level=info msg="connecting to shim 9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f" address="unix:///run/containerd/s/607254b4c029e2c12daa053691f8adb0f662b78fb9ef06b44640edc0b4375241" protocol=ttrpc version=3 Sep 16 04:58:32.800876 containerd[1909]: time="2025-09-16T04:58:32.800855033Z" level=info msg="connecting to shim 554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7" address="unix:///run/containerd/s/b57ce2309e9eddaf5e1c3e735a9e288df09fd00271aa76648e841f25e95e9a2f" protocol=ttrpc version=3 Sep 16 04:58:32.801857 containerd[1909]: time="2025-09-16T04:58:32.801827390Z" level=info msg="CreateContainer within sandbox \"69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\"" Sep 16 04:58:32.802795 containerd[1909]: time="2025-09-16T04:58:32.802771776Z" level=info msg="StartContainer for \"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\"" Sep 16 04:58:32.803835 containerd[1909]: time="2025-09-16T04:58:32.803813843Z" level=info msg="connecting to shim 25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4" address="unix:///run/containerd/s/b538fc7763431c49261db2ad4c054ba1e15f93f5877f8d97919845ebad9555a4" protocol=ttrpc version=3 Sep 16 04:58:32.828480 systemd[1]: Started cri-containerd-25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4.scope - libcontainer container 25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4. Sep 16 04:58:32.838339 systemd[1]: Started cri-containerd-554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7.scope - libcontainer container 554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7. Sep 16 04:58:32.840864 systemd[1]: Started cri-containerd-9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f.scope - libcontainer container 9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f. 
Sep 16 04:58:32.888603 kubelet[2888]: W0916 04:58:32.887850 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:32.888603 kubelet[2888]: E0916 04:58:32.888327 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:32.907782 kubelet[2888]: W0916 04:58:32.907673 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:32.907944 kubelet[2888]: E0916 04:58:32.907927 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:32.945930 kubelet[2888]: E0916 04:58:32.945745 2888 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.73:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.73:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-73.1865aa839fce1411 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-73,UID:ip-172-31-28-73,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-73,},FirstTimestamp:2025-09-16 04:58:31.856567313 +0000 UTC m=+0.449651853,LastTimestamp:2025-09-16 04:58:31.856567313 +0000 UTC m=+0.449651853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-73,}" Sep 16 04:58:32.951633 containerd[1909]: time="2025-09-16T04:58:32.951589248Z" level=info msg="StartContainer for \"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\" returns successfully" Sep 16 04:58:32.955044 containerd[1909]: time="2025-09-16T04:58:32.953695340Z" level=info msg="StartContainer for \"554ab5f8a66bcbef6cd7f4c5fa9f09968d8112f0dcd7bf0367262eefe88438e7\" returns successfully" Sep 16 04:58:32.979809 containerd[1909]: time="2025-09-16T04:58:32.979764929Z" level=info msg="StartContainer for \"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\" returns successfully" Sep 16 04:58:32.981438 kubelet[2888]: E0916 04:58:32.981013 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:33.042892 kubelet[2888]: W0916 04:58:33.042748 2888 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-73&limit=500&resourceVersion=0": dial tcp 172.31.28.73:6443: connect: connection refused Sep 16 04:58:33.042892 kubelet[2888]: E0916 04:58:33.042851 2888 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-73&limit=500&resourceVersion=0\": dial tcp 172.31.28.73:6443: connect: connection refused" logger="UnhandledError" Sep 16 04:58:33.289888 kubelet[2888]: E0916 04:58:33.289317 
2888 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-73?timeout=10s\": dial tcp 172.31.28.73:6443: connect: connection refused" interval="1.6s" Sep 16 04:58:33.503175 kubelet[2888]: I0916 04:58:33.502638 2888 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-73" Sep 16 04:58:33.503175 kubelet[2888]: E0916 04:58:33.503010 2888 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.73:6443/api/v1/nodes\": dial tcp 172.31.28.73:6443: connect: connection refused" node="ip-172-31-28-73" Sep 16 04:58:33.985048 kubelet[2888]: E0916 04:58:33.985010 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:33.985970 kubelet[2888]: E0916 04:58:33.985943 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:33.987111 kubelet[2888]: E0916 04:58:33.986354 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:34.985231 kubelet[2888]: E0916 04:58:34.985194 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:34.985680 kubelet[2888]: E0916 04:58:34.985610 2888 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:35.106213 kubelet[2888]: I0916 04:58:35.106145 2888 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-73" Sep 16 04:58:35.162842 
kubelet[2888]: E0916 04:58:35.162748 2888 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-73\" not found" node="ip-172-31-28-73" Sep 16 04:58:35.275041 kubelet[2888]: I0916 04:58:35.274917 2888 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-73" Sep 16 04:58:35.275041 kubelet[2888]: E0916 04:58:35.274969 2888 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-73\": node \"ip-172-31-28-73\" not found" Sep 16 04:58:35.316031 kubelet[2888]: E0916 04:58:35.315992 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-73\" not found" Sep 16 04:58:35.416992 kubelet[2888]: E0916 04:58:35.416941 2888 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-73\" not found" Sep 16 04:58:35.578224 kubelet[2888]: I0916 04:58:35.578101 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:35.584311 kubelet[2888]: E0916 04:58:35.584261 2888 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-73\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:35.584311 kubelet[2888]: I0916 04:58:35.584310 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:35.586019 kubelet[2888]: E0916 04:58:35.585981 2888 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-73\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:35.586019 kubelet[2888]: I0916 04:58:35.586009 2888 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:35.587904 kubelet[2888]: E0916 04:58:35.587879 2888 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-73\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:35.860384 kubelet[2888]: I0916 04:58:35.860200 2888 apiserver.go:52] "Watching apiserver" Sep 16 04:58:35.882284 kubelet[2888]: I0916 04:58:35.882239 2888 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:58:35.984978 kubelet[2888]: I0916 04:58:35.984948 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:35.988216 kubelet[2888]: E0916 04:58:35.988171 2888 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-73\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:36.646180 kubelet[2888]: I0916 04:58:36.645749 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:37.366311 systemd[1]: Reload requested from client PID 3163 ('systemctl') (unit session-9.scope)... Sep 16 04:58:37.366335 systemd[1]: Reloading... Sep 16 04:58:37.502132 zram_generator::config[3207]: No configuration found. Sep 16 04:58:37.550369 kubelet[2888]: I0916 04:58:37.550071 2888 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:37.824305 systemd[1]: Reloading finished in 457 ms. Sep 16 04:58:37.854102 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:37.873784 systemd[1]: kubelet.service: Deactivated successfully. Sep 16 04:58:37.874330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 16 04:58:37.874444 systemd[1]: kubelet.service: Consumed 847ms CPU time, 128.5M memory peak. Sep 16 04:58:37.878760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 16 04:58:38.147997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 16 04:58:38.159107 (kubelet)[3267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 16 04:58:38.231581 kubelet[3267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:38.231581 kubelet[3267]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 16 04:58:38.231581 kubelet[3267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 16 04:58:38.231581 kubelet[3267]: I0916 04:58:38.230615 3267 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 16 04:58:38.241449 kubelet[3267]: I0916 04:58:38.241409 3267 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 16 04:58:38.241449 kubelet[3267]: I0916 04:58:38.241437 3267 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 16 04:58:38.241748 kubelet[3267]: I0916 04:58:38.241731 3267 server.go:954] "Client rotation is on, will bootstrap in background" Sep 16 04:58:38.243336 kubelet[3267]: I0916 04:58:38.243288 3267 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 16 04:58:38.248790 kubelet[3267]: I0916 04:58:38.248761 3267 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 16 04:58:38.252000 kubelet[3267]: I0916 04:58:38.251932 3267 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 16 04:58:38.257892 kubelet[3267]: I0916 04:58:38.257853 3267 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 16 04:58:38.258237 kubelet[3267]: I0916 04:58:38.258198 3267 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 16 04:58:38.258727 kubelet[3267]: I0916 04:58:38.258275 3267 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CP
UManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 16 04:58:38.258883 kubelet[3267]: I0916 04:58:38.258744 3267 topology_manager.go:138] "Creating topology manager with none policy" Sep 16 04:58:38.258883 kubelet[3267]: I0916 04:58:38.258760 3267 container_manager_linux.go:304] "Creating device plugin manager" Sep 16 04:58:38.258883 kubelet[3267]: I0916 04:58:38.258829 3267 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:38.260103 kubelet[3267]: I0916 04:58:38.259025 3267 kubelet.go:446] "Attempting to sync node with API server" Sep 16 04:58:38.260103 kubelet[3267]: I0916 04:58:38.259120 3267 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 16 04:58:38.260103 kubelet[3267]: I0916 04:58:38.259150 3267 kubelet.go:352] "Adding apiserver pod source" Sep 16 04:58:38.260103 kubelet[3267]: I0916 04:58:38.259165 3267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 16 04:58:38.266110 kubelet[3267]: I0916 04:58:38.266061 3267 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 16 04:58:38.266593 kubelet[3267]: I0916 04:58:38.266563 3267 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 16 04:58:38.268097 kubelet[3267]: I0916 04:58:38.267237 3267 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 16 04:58:38.268097 kubelet[3267]: I0916 04:58:38.267285 3267 server.go:1287] "Started kubelet" Sep 16 04:58:38.273028 kubelet[3267]: I0916 04:58:38.272995 3267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 16 04:58:38.288751 kubelet[3267]: I0916 04:58:38.288706 
3267 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 16 04:58:38.290122 kubelet[3267]: I0916 04:58:38.290062 3267 server.go:479] "Adding debug handlers to kubelet server" Sep 16 04:58:38.291049 kubelet[3267]: I0916 04:58:38.290991 3267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 16 04:58:38.291253 kubelet[3267]: I0916 04:58:38.291235 3267 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 16 04:58:38.291455 kubelet[3267]: I0916 04:58:38.291439 3267 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 16 04:58:38.293035 kubelet[3267]: I0916 04:58:38.292981 3267 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 16 04:58:38.293423 kubelet[3267]: E0916 04:58:38.293407 3267 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-73\" not found" Sep 16 04:58:38.294982 kubelet[3267]: I0916 04:58:38.294512 3267 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 16 04:58:38.295228 kubelet[3267]: I0916 04:58:38.295216 3267 reconciler.go:26] "Reconciler: start to sync state" Sep 16 04:58:38.296840 kubelet[3267]: I0916 04:58:38.296643 3267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 16 04:58:38.298163 kubelet[3267]: I0916 04:58:38.298147 3267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 16 04:58:38.298256 kubelet[3267]: I0916 04:58:38.298248 3267 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 16 04:58:38.298313 kubelet[3267]: I0916 04:58:38.298306 3267 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 16 04:58:38.298350 kubelet[3267]: I0916 04:58:38.298345 3267 kubelet.go:2382] "Starting kubelet main sync loop" Sep 16 04:58:38.298433 kubelet[3267]: E0916 04:58:38.298418 3267 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 16 04:58:38.307679 kubelet[3267]: E0916 04:58:38.307644 3267 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 16 04:58:38.308292 kubelet[3267]: I0916 04:58:38.307932 3267 factory.go:221] Registration of the systemd container factory successfully Sep 16 04:58:38.308292 kubelet[3267]: I0916 04:58:38.308028 3267 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 16 04:58:38.311227 kubelet[3267]: I0916 04:58:38.311127 3267 factory.go:221] Registration of the containerd container factory successfully Sep 16 04:58:38.368289 kubelet[3267]: I0916 04:58:38.368261 3267 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 16 04:58:38.368289 kubelet[3267]: I0916 04:58:38.368277 3267 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 16 04:58:38.368289 kubelet[3267]: I0916 04:58:38.368296 3267 state_mem.go:36] "Initialized new in-memory state store" Sep 16 04:58:38.368587 kubelet[3267]: I0916 04:58:38.368549 3267 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 16 04:58:38.368587 kubelet[3267]: I0916 04:58:38.368563 3267 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 16 04:58:38.368587 kubelet[3267]: I0916 04:58:38.368585 3267 policy_none.go:49] "None policy: Start" Sep 16 04:58:38.368587 kubelet[3267]: I0916 04:58:38.368595 3267 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 16 04:58:38.368743 kubelet[3267]: I0916 04:58:38.368605 
3267 state_mem.go:35] "Initializing new in-memory state store" Sep 16 04:58:38.368743 kubelet[3267]: I0916 04:58:38.368714 3267 state_mem.go:75] "Updated machine memory state" Sep 16 04:58:38.375017 kubelet[3267]: I0916 04:58:38.374431 3267 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 16 04:58:38.375017 kubelet[3267]: I0916 04:58:38.374591 3267 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 16 04:58:38.375017 kubelet[3267]: I0916 04:58:38.374601 3267 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 16 04:58:38.375580 kubelet[3267]: I0916 04:58:38.375564 3267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 16 04:58:38.382542 kubelet[3267]: E0916 04:58:38.382495 3267 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 16 04:58:38.393177 sudo[3299]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 16 04:58:38.393926 sudo[3299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 16 04:58:38.402139 kubelet[3267]: I0916 04:58:38.401734 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:38.409241 kubelet[3267]: I0916 04:58:38.408969 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:38.411246 kubelet[3267]: I0916 04:58:38.409629 3267 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:38.435288 kubelet[3267]: E0916 04:58:38.435253 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-73\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-73" 
Sep 16 04:58:38.436969 kubelet[3267]: E0916 04:58:38.436937 3267 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-73\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:38.480766 kubelet[3267]: I0916 04:58:38.480032 3267 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-73" Sep 16 04:58:38.494823 kubelet[3267]: I0916 04:58:38.494784 3267 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-73" Sep 16 04:58:38.494946 kubelet[3267]: I0916 04:58:38.494869 3267 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-73" Sep 16 04:58:38.497096 kubelet[3267]: I0916 04:58:38.496798 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cb982fbc4c71d688fa9a41f8d442ea4-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-73\" (UID: \"6cb982fbc4c71d688fa9a41f8d442ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:38.497096 kubelet[3267]: I0916 04:58:38.496958 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cb982fbc4c71d688fa9a41f8d442ea4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-73\" (UID: \"6cb982fbc4c71d688fa9a41f8d442ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:38.498890 kubelet[3267]: I0916 04:58:38.498847 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:38.499401 kubelet[3267]: I0916 04:58:38.499013 3267 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:38.499401 kubelet[3267]: I0916 04:58:38.499094 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:38.499401 kubelet[3267]: I0916 04:58:38.499140 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 04:58:38.499401 kubelet[3267]: I0916 04:58:38.499168 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cb982fbc4c71d688fa9a41f8d442ea4-ca-certs\") pod \"kube-apiserver-ip-172-31-28-73\" (UID: \"6cb982fbc4c71d688fa9a41f8d442ea4\") " pod="kube-system/kube-apiserver-ip-172-31-28-73" Sep 16 04:58:38.600710 kubelet[3267]: I0916 04:58:38.600372 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54297050fc671ece3fea6c4a273d1aa5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-73\" (UID: \"54297050fc671ece3fea6c4a273d1aa5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-73" Sep 16 
04:58:38.600710 kubelet[3267]: I0916 04:58:38.600523 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5b7ca9021073c131330808a61d7c887-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-73\" (UID: \"d5b7ca9021073c131330808a61d7c887\") " pod="kube-system/kube-scheduler-ip-172-31-28-73" Sep 16 04:58:38.909029 sudo[3299]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:39.266344 kubelet[3267]: I0916 04:58:39.265991 3267 apiserver.go:52] "Watching apiserver" Sep 16 04:58:39.295800 kubelet[3267]: I0916 04:58:39.295743 3267 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 16 04:58:39.396017 kubelet[3267]: I0916 04:58:39.395674 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-73" podStartSLOduration=1.395632937 podStartE2EDuration="1.395632937s" podCreationTimestamp="2025-09-16 04:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:39.393422234 +0000 UTC m=+1.225568731" watchObservedRunningTime="2025-09-16 04:58:39.395632937 +0000 UTC m=+1.227779426" Sep 16 04:58:39.434451 kubelet[3267]: I0916 04:58:39.434261 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-73" podStartSLOduration=3.434237803 podStartE2EDuration="3.434237803s" podCreationTimestamp="2025-09-16 04:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:39.407376191 +0000 UTC m=+1.239522688" watchObservedRunningTime="2025-09-16 04:58:39.434237803 +0000 UTC m=+1.266384298" Sep 16 04:58:39.461627 kubelet[3267]: I0916 04:58:39.461505 3267 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-ip-172-31-28-73" podStartSLOduration=2.46148659 podStartE2EDuration="2.46148659s" podCreationTimestamp="2025-09-16 04:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:39.438417505 +0000 UTC m=+1.270564003" watchObservedRunningTime="2025-09-16 04:58:39.46148659 +0000 UTC m=+1.293633092" Sep 16 04:58:40.745189 sudo[2321]: pam_unix(sudo:session): session closed for user root Sep 16 04:58:40.767855 sshd[2313]: Connection closed by 139.178.68.195 port 38304 Sep 16 04:58:40.768712 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Sep 16 04:58:40.772920 systemd[1]: sshd@8-172.31.28.73:22-139.178.68.195:38304.service: Deactivated successfully. Sep 16 04:58:40.776737 systemd[1]: session-9.scope: Deactivated successfully. Sep 16 04:58:40.777278 systemd[1]: session-9.scope: Consumed 4.587s CPU time, 207.8M memory peak. Sep 16 04:58:40.779691 systemd-logind[1858]: Session 9 logged out. Waiting for processes to exit. Sep 16 04:58:40.782256 systemd-logind[1858]: Removed session 9. Sep 16 04:58:42.304142 kubelet[3267]: I0916 04:58:42.303943 3267 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 16 04:58:42.310784 containerd[1909]: time="2025-09-16T04:58:42.310701945Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 16 04:58:42.312838 kubelet[3267]: I0916 04:58:42.312640 3267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 16 04:58:42.902141 update_engine[1861]: I20250916 04:58:42.901722 1861 update_attempter.cc:509] Updating boot flags... 
Sep 16 04:58:43.030791 kubelet[3267]: I0916 04:58:43.030434 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-kube-proxy\") pod \"kube-proxy-7xnfj\" (UID: \"184d4ce8-e2c4-48d3-b67e-3b3eb31ff037\") " pod="kube-system/kube-proxy-7xnfj"
Sep 16 04:58:43.031340 systemd[1]: Created slice kubepods-besteffort-pod184d4ce8_e2c4_48d3_b67e_3b3eb31ff037.slice - libcontainer container kubepods-besteffort-pod184d4ce8_e2c4_48d3_b67e_3b3eb31ff037.slice.
Sep 16 04:58:43.037766 kubelet[3267]: I0916 04:58:43.034077 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz4kn\" (UniqueName: \"kubernetes.io/projected/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-kube-api-access-qz4kn\") pod \"kube-proxy-7xnfj\" (UID: \"184d4ce8-e2c4-48d3-b67e-3b3eb31ff037\") " pod="kube-system/kube-proxy-7xnfj"
Sep 16 04:58:43.038196 kubelet[3267]: I0916 04:58:43.038170 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-xtables-lock\") pod \"kube-proxy-7xnfj\" (UID: \"184d4ce8-e2c4-48d3-b67e-3b3eb31ff037\") " pod="kube-system/kube-proxy-7xnfj"
Sep 16 04:58:43.039024 kubelet[3267]: I0916 04:58:43.038998 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-lib-modules\") pod \"kube-proxy-7xnfj\" (UID: \"184d4ce8-e2c4-48d3-b67e-3b3eb31ff037\") " pod="kube-system/kube-proxy-7xnfj"
Sep 16 04:58:43.043125 kubelet[3267]: W0916 04:58:43.041886 3267 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-28-73" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-73' and this object
Sep 16 04:58:43.043125 kubelet[3267]: E0916 04:58:43.041932 3267 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-28-73\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-73' and this object" logger="UnhandledError"
Sep 16 04:58:43.043509 kubelet[3267]: I0916 04:58:43.043364 3267 status_manager.go:890] "Failed to get status for pod" podUID="184d4ce8-e2c4-48d3-b67e-3b3eb31ff037" pod="kube-system/kube-proxy-7xnfj" err="pods \"kube-proxy-7xnfj\" is forbidden: User \"system:node:ip-172-31-28-73\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-73' and this object"
Sep 16 04:58:43.043509 kubelet[3267]: W0916 04:58:43.043449 3267 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-28-73" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-73' and this object
Sep 16 04:58:43.043509 kubelet[3267]: E0916 04:58:43.043474 3267 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-28-73\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-73' and this object" logger="UnhandledError"
Sep 16 04:58:43.084514 systemd[1]: Created slice kubepods-burstable-podebdc0486_555a_4904_86b5_f5d7b6c3927d.slice - libcontainer container kubepods-burstable-podebdc0486_555a_4904_86b5_f5d7b6c3927d.slice.
Sep 16 04:58:43.139529 kubelet[3267]: I0916 04:58:43.139489 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hostproc\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142107 kubelet[3267]: I0916 04:58:43.140535 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-etc-cni-netd\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142107 kubelet[3267]: I0916 04:58:43.140578 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-xtables-lock\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142107 kubelet[3267]: I0916 04:58:43.140608 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebdc0486-555a-4904-86b5-f5d7b6c3927d-clustermesh-secrets\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142107 kubelet[3267]: I0916 04:58:43.140652 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-cgroup\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142107 kubelet[3267]: I0916 04:58:43.140695 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-bpf-maps\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142107 kubelet[3267]: I0916 04:58:43.140720 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-net\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142630 kubelet[3267]: I0916 04:58:43.140924 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hubble-tls\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142630 kubelet[3267]: I0916 04:58:43.140967 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-kernel\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142630 kubelet[3267]: I0916 04:58:43.141022 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-config-path\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.142630 kubelet[3267]: I0916 04:58:43.141049 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmfm\" (UniqueName: \"kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-kube-api-access-hqmfm\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.144128 kubelet[3267]: I0916 04:58:43.142819 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-run\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.144128 kubelet[3267]: I0916 04:58:43.142881 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cni-path\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.144128 kubelet[3267]: I0916 04:58:43.142908 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-lib-modules\") pod \"cilium-656zh\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") " pod="kube-system/cilium-656zh"
Sep 16 04:58:43.521658 systemd[1]: Created slice kubepods-besteffort-pod143e3301_5137_46d0_bd13_58352d95ea88.slice - libcontainer container kubepods-besteffort-pod143e3301_5137_46d0_bd13_58352d95ea88.slice.
Sep 16 04:58:43.552090 kubelet[3267]: I0916 04:58:43.552029 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/143e3301-5137-46d0-bd13-58352d95ea88-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hrqrk\" (UID: \"143e3301-5137-46d0-bd13-58352d95ea88\") " pod="kube-system/cilium-operator-6c4d7847fc-hrqrk"
Sep 16 04:58:43.552542 kubelet[3267]: I0916 04:58:43.552077 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfmf\" (UniqueName: \"kubernetes.io/projected/143e3301-5137-46d0-bd13-58352d95ea88-kube-api-access-jrfmf\") pod \"cilium-operator-6c4d7847fc-hrqrk\" (UID: \"143e3301-5137-46d0-bd13-58352d95ea88\") " pod="kube-system/cilium-operator-6c4d7847fc-hrqrk"
Sep 16 04:58:44.145046 kubelet[3267]: E0916 04:58:44.144094 3267 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:58:44.145046 kubelet[3267]: E0916 04:58:44.144321 3267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-kube-proxy podName:184d4ce8-e2c4-48d3-b67e-3b3eb31ff037 nodeName:}" failed. No retries permitted until 2025-09-16 04:58:44.644301174 +0000 UTC m=+6.476447662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-kube-proxy") pod "kube-proxy-7xnfj" (UID: "184d4ce8-e2c4-48d3-b67e-3b3eb31ff037") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:58:44.214286 kubelet[3267]: E0916 04:58:44.214191 3267 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:58:44.214286 kubelet[3267]: E0916 04:58:44.214287 3267 projected.go:194] Error preparing data for projected volume kube-api-access-qz4kn for pod kube-system/kube-proxy-7xnfj: failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:58:44.214506 kubelet[3267]: E0916 04:58:44.214396 3267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-kube-api-access-qz4kn podName:184d4ce8-e2c4-48d3-b67e-3b3eb31ff037 nodeName:}" failed. No retries permitted until 2025-09-16 04:58:44.714370631 +0000 UTC m=+6.546517126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qz4kn" (UniqueName: "kubernetes.io/projected/184d4ce8-e2c4-48d3-b67e-3b3eb31ff037-kube-api-access-qz4kn") pod "kube-proxy-7xnfj" (UID: "184d4ce8-e2c4-48d3-b67e-3b3eb31ff037") : failed to sync configmap cache: timed out waiting for the condition
Sep 16 04:58:44.430074 containerd[1909]: time="2025-09-16T04:58:44.429398106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hrqrk,Uid:143e3301-5137-46d0-bd13-58352d95ea88,Namespace:kube-system,Attempt:0,}"
Sep 16 04:58:44.456615 containerd[1909]: time="2025-09-16T04:58:44.456569685Z" level=info msg="connecting to shim 90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f" address="unix:///run/containerd/s/be0859ea6151741428f0fdaac8270a0c0a667c14fe653cf8fdf822cbf90b706a" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:58:44.483315 systemd[1]: Started cri-containerd-90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f.scope - libcontainer container 90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f.
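The two nestedpendingoperations entries above encode a fixed backoff: the "No retries permitted until" deadline is simply the failure time plus durationBeforeRetry (500ms here). A minimal sketch of that arithmetic, with the log's timestamp truncated to microseconds and a hypothetical helper name:

```python
from datetime import datetime, timedelta, timezone

def retry_deadline(failed_at: datetime, backoff: timedelta) -> datetime:
    # Mirrors kubelet's "No retries permitted until <failed_at + durationBeforeRetry>".
    return failed_at + backoff

# Failure timestamp from the kube-proxy configmap volume entry, truncated to µs.
failed = datetime(2025, 9, 16, 4, 58, 44, 144301, tzinfo=timezone.utc)
print(retry_deadline(failed, timedelta(milliseconds=500)))  # 2025-09-16 04:58:44.644301+00:00
```

The computed deadline matches the `2025-09-16 04:58:44.644301174 +0000 UTC` printed in the log (up to the nanosecond digits `datetime` cannot hold).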
Sep 16 04:58:44.548783 containerd[1909]: time="2025-09-16T04:58:44.548677728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hrqrk,Uid:143e3301-5137-46d0-bd13-58352d95ea88,Namespace:kube-system,Attempt:0,} returns sandbox id \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\""
Sep 16 04:58:44.552213 containerd[1909]: time="2025-09-16T04:58:44.552068863Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 16 04:58:44.596142 containerd[1909]: time="2025-09-16T04:58:44.596066856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-656zh,Uid:ebdc0486-555a-4904-86b5-f5d7b6c3927d,Namespace:kube-system,Attempt:0,}"
Sep 16 04:58:44.632132 containerd[1909]: time="2025-09-16T04:58:44.631827208Z" level=info msg="connecting to shim d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e" address="unix:///run/containerd/s/e817215ecfa5bd46f51aa2ba1b08cc63a687b58ff434e13fe643a141aace9613" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:58:44.664289 systemd[1]: Started cri-containerd-d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e.scope - libcontainer container d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e.
Sep 16 04:58:44.697753 containerd[1909]: time="2025-09-16T04:58:44.697656821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-656zh,Uid:ebdc0486-555a-4904-86b5-f5d7b6c3927d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\""
Sep 16 04:58:44.848303 containerd[1909]: time="2025-09-16T04:58:44.848040889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xnfj,Uid:184d4ce8-e2c4-48d3-b67e-3b3eb31ff037,Namespace:kube-system,Attempt:0,}"
Sep 16 04:58:44.868739 containerd[1909]: time="2025-09-16T04:58:44.868698583Z" level=info msg="connecting to shim 124771e6b1b7b7ccd9a26a4c4555fc371a897bd59ec8640787ee72275886e738" address="unix:///run/containerd/s/93193cc8cb8a4b9d5f8f60c4a6438da2881fba4b51b3f21cff7145a8445579bc" namespace=k8s.io protocol=ttrpc version=3
Sep 16 04:58:44.896310 systemd[1]: Started cri-containerd-124771e6b1b7b7ccd9a26a4c4555fc371a897bd59ec8640787ee72275886e738.scope - libcontainer container 124771e6b1b7b7ccd9a26a4c4555fc371a897bd59ec8640787ee72275886e738.
Sep 16 04:58:44.937317 containerd[1909]: time="2025-09-16T04:58:44.937221246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7xnfj,Uid:184d4ce8-e2c4-48d3-b67e-3b3eb31ff037,Namespace:kube-system,Attempt:0,} returns sandbox id \"124771e6b1b7b7ccd9a26a4c4555fc371a897bd59ec8640787ee72275886e738\""
Sep 16 04:58:44.940534 containerd[1909]: time="2025-09-16T04:58:44.940482982Z" level=info msg="CreateContainer within sandbox \"124771e6b1b7b7ccd9a26a4c4555fc371a897bd59ec8640787ee72275886e738\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 16 04:58:44.956587 containerd[1909]: time="2025-09-16T04:58:44.956463168Z" level=info msg="Container 7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:58:44.969812 containerd[1909]: time="2025-09-16T04:58:44.969755902Z" level=info msg="CreateContainer within sandbox \"124771e6b1b7b7ccd9a26a4c4555fc371a897bd59ec8640787ee72275886e738\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709\""
Sep 16 04:58:44.970777 containerd[1909]: time="2025-09-16T04:58:44.970711793Z" level=info msg="StartContainer for \"7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709\""
Sep 16 04:58:44.973490 containerd[1909]: time="2025-09-16T04:58:44.973228282Z" level=info msg="connecting to shim 7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709" address="unix:///run/containerd/s/93193cc8cb8a4b9d5f8f60c4a6438da2881fba4b51b3f21cff7145a8445579bc" protocol=ttrpc version=3
Sep 16 04:58:45.013555 systemd[1]: Started cri-containerd-7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709.scope - libcontainer container 7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709.
Sep 16 04:58:45.084784 containerd[1909]: time="2025-09-16T04:58:45.084732610Z" level=info msg="StartContainer for \"7cd55131622d652186a2028a416f1026e8c7c37b500c7397b425237fd368e709\" returns successfully"
Sep 16 04:58:45.935744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount298937502.mount: Deactivated successfully.
Sep 16 04:58:48.800535 containerd[1909]: time="2025-09-16T04:58:48.800481577Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:48.801548 containerd[1909]: time="2025-09-16T04:58:48.801431292Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 16 04:58:48.803236 containerd[1909]: time="2025-09-16T04:58:48.803195141Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:58:48.805594 containerd[1909]: time="2025-09-16T04:58:48.805543158Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.253375583s"
Sep 16 04:58:48.805594 containerd[1909]: time="2025-09-16T04:58:48.805591653Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 16 04:58:48.807850 containerd[1909]: time="2025-09-16T04:58:48.807691909Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 16 04:58:48.808484 containerd[1909]: time="2025-09-16T04:58:48.808443776Z" level=info msg="CreateContainer within sandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 16 04:58:48.826135 containerd[1909]: time="2025-09-16T04:58:48.825924026Z" level=info msg="Container c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:58:48.825970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986012094.mount: Deactivated successfully.
Sep 16 04:58:48.834245 containerd[1909]: time="2025-09-16T04:58:48.834205734Z" level=info msg="CreateContainer within sandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\""
Sep 16 04:58:48.837076 containerd[1909]: time="2025-09-16T04:58:48.836349629Z" level=info msg="StartContainer for \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\""
Sep 16 04:58:48.837363 containerd[1909]: time="2025-09-16T04:58:48.837308873Z" level=info msg="connecting to shim c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5" address="unix:///run/containerd/s/be0859ea6151741428f0fdaac8270a0c0a667c14fe653cf8fdf822cbf90b706a" protocol=ttrpc version=3
Sep 16 04:58:48.863317 systemd[1]: Started cri-containerd-c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5.scope - libcontainer container c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5.
Sep 16 04:58:48.903053 containerd[1909]: time="2025-09-16T04:58:48.902969493Z" level=info msg="StartContainer for \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" returns successfully"
Sep 16 04:58:49.523889 kubelet[3267]: I0916 04:58:49.523819 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7xnfj" podStartSLOduration=7.523800362 podStartE2EDuration="7.523800362s" podCreationTimestamp="2025-09-16 04:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:58:45.394886781 +0000 UTC m=+7.227033260" watchObservedRunningTime="2025-09-16 04:58:49.523800362 +0000 UTC m=+11.355946860"
Sep 16 04:58:49.658918 kubelet[3267]: I0916 04:58:49.658628 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hrqrk" podStartSLOduration=2.403529626 podStartE2EDuration="6.658606317s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="2025-09-16 04:58:44.551508114 +0000 UTC m=+6.383654602" lastFinishedPulling="2025-09-16 04:58:48.806584802 +0000 UTC m=+10.638731293" observedRunningTime="2025-09-16 04:58:49.525751922 +0000 UTC m=+11.357898419" watchObservedRunningTime="2025-09-16 04:58:49.658606317 +0000 UTC m=+11.490752814"
Sep 16 04:59:03.149841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791454459.mount: Deactivated successfully.
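In the pod_startup_latency_tracker entries above, the figures are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (for kube-proxy the pull timestamps are zero values, so SLO equals E2E). A sketch of that relationship under those assumptions, using the cilium-operator pod's timestamps truncated to microseconds:

```python
from datetime import datetime, timezone

def start_durations(created, watch_observed, pull_started=None, pull_finished=None):
    # podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp.
    # podStartSLOduration = E2E duration minus the image-pull window (if any).
    e2e = (watch_observed - created).total_seconds()
    pull = (pull_finished - pull_started).total_seconds() if pull_started else 0.0
    return e2e, e2e - pull

utc = timezone.utc
created = datetime(2025, 9, 16, 4, 58, 43, tzinfo=utc)
watch = datetime(2025, 9, 16, 4, 58, 49, 658606, tzinfo=utc)
pull_s = datetime(2025, 9, 16, 4, 58, 44, 551508, tzinfo=utc)
pull_f = datetime(2025, 9, 16, 4, 58, 48, 806584, tzinfo=utc)
e2e, slo = start_durations(created, watch, pull_s, pull_f)
# e2e ≈ 6.658606 (log: podStartE2EDuration="6.658606317s")
# slo ≈ 2.403530 (log: podStartSLOduration=2.403529626)
```

The small residual differences are the nanosecond digits dropped by the microsecond truncation.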
Sep 16 04:59:06.103279 containerd[1909]: time="2025-09-16T04:59:06.103216653Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:06.105138 containerd[1909]: time="2025-09-16T04:59:06.105070372Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 16 04:59:06.108125 containerd[1909]: time="2025-09-16T04:59:06.107422476Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 16 04:59:06.108899 containerd[1909]: time="2025-09-16T04:59:06.108780639Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 17.300995187s"
Sep 16 04:59:06.108899 containerd[1909]: time="2025-09-16T04:59:06.108817695Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 16 04:59:06.112811 containerd[1909]: time="2025-09-16T04:59:06.112771122Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 16 04:59:06.157030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987805211.mount: Deactivated successfully.
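The two "Pulled image … size X in Ys" completions in this log report a compressed size and a wall-clock duration, from which an effective pull rate can be estimated. This is a rough figure only: the reported interval includes registry round-trips and unpacking, not just transfer. A sketch using the operator and agent pulls above:

```python
def pull_rate_mib_s(size_bytes: int, seconds: float) -> float:
    # Effective throughput implied by containerd's "Pulled image ... size X in Ys".
    return size_bytes / seconds / (1024 * 1024)

# Figures taken from the two PullImage completions in the log:
operator = pull_rate_mib_s(18897442, 4.253375583)    # ≈ 4.2 MiB/s
cilium = pull_rate_mib_s(166719855, 17.300995187)    # ≈ 9.2 MiB/s
```

The much larger cilium image pulled at roughly twice the effective rate, consistent with per-pull fixed overhead dominating the smaller operator image.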
Sep 16 04:59:06.159045 containerd[1909]: time="2025-09-16T04:59:06.158795268Z" level=info msg="Container 205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:06.182946 containerd[1909]: time="2025-09-16T04:59:06.182873196Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\""
Sep 16 04:59:06.184820 containerd[1909]: time="2025-09-16T04:59:06.184750534Z" level=info msg="StartContainer for \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\""
Sep 16 04:59:06.187262 containerd[1909]: time="2025-09-16T04:59:06.186531478Z" level=info msg="connecting to shim 205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23" address="unix:///run/containerd/s/e817215ecfa5bd46f51aa2ba1b08cc63a687b58ff434e13fe643a141aace9613" protocol=ttrpc version=3
Sep 16 04:59:06.269713 systemd[1]: Started cri-containerd-205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23.scope - libcontainer container 205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23.
Sep 16 04:59:06.314884 containerd[1909]: time="2025-09-16T04:59:06.314844768Z" level=info msg="StartContainer for \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" returns successfully"
Sep 16 04:59:06.325753 systemd[1]: cri-containerd-205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23.scope: Deactivated successfully.
Sep 16 04:59:06.370605 containerd[1909]: time="2025-09-16T04:59:06.368933708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" id:\"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" pid:3828 exited_at:{seconds:1757998746 nanos:330604407}"
Sep 16 04:59:06.388277 containerd[1909]: time="2025-09-16T04:59:06.388236306Z" level=info msg="received exit event container_id:\"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" id:\"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" pid:3828 exited_at:{seconds:1757998746 nanos:330604407}"
Sep 16 04:59:07.154126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23-rootfs.mount: Deactivated successfully.
Sep 16 04:59:07.456584 containerd[1909]: time="2025-09-16T04:59:07.456352657Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 16 04:59:07.470723 containerd[1909]: time="2025-09-16T04:59:07.470686978Z" level=info msg="Container 2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:07.489694 containerd[1909]: time="2025-09-16T04:59:07.489642103Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\""
Sep 16 04:59:07.490348 containerd[1909]: time="2025-09-16T04:59:07.490299104Z" level=info msg="StartContainer for \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\""
Sep 16 04:59:07.494710 containerd[1909]: time="2025-09-16T04:59:07.494668411Z" level=info msg="connecting to shim 2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39" address="unix:///run/containerd/s/e817215ecfa5bd46f51aa2ba1b08cc63a687b58ff434e13fe643a141aace9613" protocol=ttrpc version=3
Sep 16 04:59:07.521322 systemd[1]: Started cri-containerd-2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39.scope - libcontainer container 2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39.
Sep 16 04:59:07.560543 containerd[1909]: time="2025-09-16T04:59:07.560482312Z" level=info msg="StartContainer for \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" returns successfully"
Sep 16 04:59:07.575756 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 16 04:59:07.575981 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:59:07.577888 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:59:07.581305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 16 04:59:07.584282 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 16 04:59:07.585391 systemd[1]: cri-containerd-2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39.scope: Deactivated successfully.
Sep 16 04:59:07.589354 containerd[1909]: time="2025-09-16T04:59:07.587829404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" id:\"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" pid:3873 exited_at:{seconds:1757998747 nanos:587494607}"
Sep 16 04:59:07.589993 containerd[1909]: time="2025-09-16T04:59:07.589962191Z" level=info msg="received exit event container_id:\"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" id:\"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" pid:3873 exited_at:{seconds:1757998747 nanos:587494607}"
Sep 16 04:59:07.630518 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 16 04:59:08.154490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39-rootfs.mount: Deactivated successfully.
Sep 16 04:59:08.460794 containerd[1909]: time="2025-09-16T04:59:08.460711657Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 16 04:59:08.485110 containerd[1909]: time="2025-09-16T04:59:08.483399426Z" level=info msg="Container 944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:08.488078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415520521.mount: Deactivated successfully.
Sep 16 04:59:08.503037 containerd[1909]: time="2025-09-16T04:59:08.502984078Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\""
Sep 16 04:59:08.505171 containerd[1909]: time="2025-09-16T04:59:08.505132113Z" level=info msg="StartContainer for \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\""
Sep 16 04:59:08.511679 containerd[1909]: time="2025-09-16T04:59:08.509893406Z" level=info msg="connecting to shim 944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c" address="unix:///run/containerd/s/e817215ecfa5bd46f51aa2ba1b08cc63a687b58ff434e13fe643a141aace9613" protocol=ttrpc version=3
Sep 16 04:59:08.550887 systemd[1]: Started cri-containerd-944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c.scope - libcontainer container 944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c.
Sep 16 04:59:08.611379 containerd[1909]: time="2025-09-16T04:59:08.611250885Z" level=info msg="StartContainer for \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" returns successfully"
Sep 16 04:59:08.620547 systemd[1]: cri-containerd-944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c.scope: Deactivated successfully.
Sep 16 04:59:08.621143 systemd[1]: cri-containerd-944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c.scope: Consumed 28ms CPU time, 5.8M memory peak, 1M read from disk.
Sep 16 04:59:08.625355 containerd[1909]: time="2025-09-16T04:59:08.625308778Z" level=info msg="received exit event container_id:\"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" id:\"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" pid:3920 exited_at:{seconds:1757998748 nanos:624891067}"
Sep 16 04:59:08.625900 containerd[1909]: time="2025-09-16T04:59:08.625847844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" id:\"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" pid:3920 exited_at:{seconds:1757998748 nanos:624891067}"
Sep 16 04:59:08.656163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c-rootfs.mount: Deactivated successfully.
Sep 16 04:59:09.467965 containerd[1909]: time="2025-09-16T04:59:09.467361445Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 16 04:59:09.492868 containerd[1909]: time="2025-09-16T04:59:09.491491518Z" level=info msg="Container 00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f: CDI devices from CRI Config.CDIDevices: []"
Sep 16 04:59:09.497704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187373198.mount: Deactivated successfully.
Sep 16 04:59:09.507704 containerd[1909]: time="2025-09-16T04:59:09.507640720Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\"" Sep 16 04:59:09.508449 containerd[1909]: time="2025-09-16T04:59:09.508415991Z" level=info msg="StartContainer for \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\"" Sep 16 04:59:09.509554 containerd[1909]: time="2025-09-16T04:59:09.509502041Z" level=info msg="connecting to shim 00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f" address="unix:///run/containerd/s/e817215ecfa5bd46f51aa2ba1b08cc63a687b58ff434e13fe643a141aace9613" protocol=ttrpc version=3 Sep 16 04:59:09.534445 systemd[1]: Started cri-containerd-00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f.scope - libcontainer container 00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f. Sep 16 04:59:09.577486 systemd[1]: cri-containerd-00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f.scope: Deactivated successfully. 
Sep 16 04:59:09.578058 containerd[1909]: time="2025-09-16T04:59:09.577618593Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" id:\"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" pid:3963 exited_at:{seconds:1757998749 nanos:577254562}" Sep 16 04:59:09.581001 containerd[1909]: time="2025-09-16T04:59:09.580948398Z" level=info msg="received exit event container_id:\"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" id:\"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" pid:3963 exited_at:{seconds:1757998749 nanos:577254562}" Sep 16 04:59:09.594414 containerd[1909]: time="2025-09-16T04:59:09.594349071Z" level=info msg="StartContainer for \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" returns successfully" Sep 16 04:59:09.615679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f-rootfs.mount: Deactivated successfully. 
Sep 16 04:59:10.472539 containerd[1909]: time="2025-09-16T04:59:10.472499587Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 04:59:10.522112 containerd[1909]: time="2025-09-16T04:59:10.519646503Z" level=info msg="Container 6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:10.541814 containerd[1909]: time="2025-09-16T04:59:10.541770000Z" level=info msg="CreateContainer within sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\"" Sep 16 04:59:10.543009 containerd[1909]: time="2025-09-16T04:59:10.542978859Z" level=info msg="StartContainer for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\"" Sep 16 04:59:10.544425 containerd[1909]: time="2025-09-16T04:59:10.544352077Z" level=info msg="connecting to shim 6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05" address="unix:///run/containerd/s/e817215ecfa5bd46f51aa2ba1b08cc63a687b58ff434e13fe643a141aace9613" protocol=ttrpc version=3 Sep 16 04:59:10.604156 systemd[1]: Started cri-containerd-6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05.scope - libcontainer container 6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05. 
Sep 16 04:59:10.667351 containerd[1909]: time="2025-09-16T04:59:10.667302464Z" level=info msg="StartContainer for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" returns successfully" Sep 16 04:59:10.776399 containerd[1909]: time="2025-09-16T04:59:10.776194697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" id:\"6e90a1fae77989e94a6de3362f66f431f6c96b4ae9e816ab21ed8206c7c3315f\" pid:4031 exited_at:{seconds:1757998750 nanos:774958490}" Sep 16 04:59:10.843459 kubelet[3267]: I0916 04:59:10.843424 3267 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 16 04:59:10.894510 systemd[1]: Created slice kubepods-burstable-pod0b324f85_839d_452f_ad64_68abb802d1c5.slice - libcontainer container kubepods-burstable-pod0b324f85_839d_452f_ad64_68abb802d1c5.slice. Sep 16 04:59:10.911477 systemd[1]: Created slice kubepods-burstable-podf3a955d0_a8fa_4172_9864_859670acf183.slice - libcontainer container kubepods-burstable-podf3a955d0_a8fa_4172_9864_859670acf183.slice. 
Sep 16 04:59:10.964531 kubelet[3267]: I0916 04:59:10.964490 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27jbv\" (UniqueName: \"kubernetes.io/projected/0b324f85-839d-452f-ad64-68abb802d1c5-kube-api-access-27jbv\") pod \"coredns-668d6bf9bc-r5l6p\" (UID: \"0b324f85-839d-452f-ad64-68abb802d1c5\") " pod="kube-system/coredns-668d6bf9bc-r5l6p" Sep 16 04:59:10.966392 kubelet[3267]: I0916 04:59:10.966324 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3a955d0-a8fa-4172-9864-859670acf183-config-volume\") pod \"coredns-668d6bf9bc-j2x66\" (UID: \"f3a955d0-a8fa-4172-9864-859670acf183\") " pod="kube-system/coredns-668d6bf9bc-j2x66" Sep 16 04:59:10.966648 kubelet[3267]: I0916 04:59:10.966611 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjsw\" (UniqueName: \"kubernetes.io/projected/f3a955d0-a8fa-4172-9864-859670acf183-kube-api-access-dxjsw\") pod \"coredns-668d6bf9bc-j2x66\" (UID: \"f3a955d0-a8fa-4172-9864-859670acf183\") " pod="kube-system/coredns-668d6bf9bc-j2x66" Sep 16 04:59:10.967012 kubelet[3267]: I0916 04:59:10.966971 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b324f85-839d-452f-ad64-68abb802d1c5-config-volume\") pod \"coredns-668d6bf9bc-r5l6p\" (UID: \"0b324f85-839d-452f-ad64-68abb802d1c5\") " pod="kube-system/coredns-668d6bf9bc-r5l6p" Sep 16 04:59:11.207579 containerd[1909]: time="2025-09-16T04:59:11.207415220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r5l6p,Uid:0b324f85-839d-452f-ad64-68abb802d1c5,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:11.218722 containerd[1909]: time="2025-09-16T04:59:11.218195364Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-j2x66,Uid:f3a955d0-a8fa-4172-9864-859670acf183,Namespace:kube-system,Attempt:0,}" Sep 16 04:59:13.143708 (udev-worker)[4091]: Network interface NamePolicy= disabled on kernel command line. Sep 16 04:59:13.144195 systemd-networkd[1815]: cilium_host: Link UP Sep 16 04:59:13.144368 systemd-networkd[1815]: cilium_net: Link UP Sep 16 04:59:13.144569 systemd-networkd[1815]: cilium_host: Gained carrier Sep 16 04:59:13.144760 systemd-networkd[1815]: cilium_net: Gained carrier Sep 16 04:59:13.145824 (udev-worker)[4093]: Network interface NamePolicy= disabled on kernel command line. Sep 16 04:59:13.146660 systemd-networkd[1815]: cilium_net: Gained IPv6LL Sep 16 04:59:13.289299 systemd-networkd[1815]: cilium_vxlan: Link UP Sep 16 04:59:13.289309 systemd-networkd[1815]: cilium_vxlan: Gained carrier Sep 16 04:59:13.504754 systemd-networkd[1815]: cilium_host: Gained IPv6LL Sep 16 04:59:13.828127 kernel: NET: Registered PF_ALG protocol family Sep 16 04:59:14.641589 kubelet[3267]: I0916 04:59:14.641531 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-656zh" podStartSLOduration=11.231503533 podStartE2EDuration="32.641515464s" podCreationTimestamp="2025-09-16 04:58:42 +0000 UTC" firstStartedPulling="2025-09-16 04:58:44.699738427 +0000 UTC m=+6.531884903" lastFinishedPulling="2025-09-16 04:59:06.109750289 +0000 UTC m=+27.941896834" observedRunningTime="2025-09-16 04:59:11.513298605 +0000 UTC m=+33.345445103" watchObservedRunningTime="2025-09-16 04:59:14.641515464 +0000 UTC m=+36.473661961" Sep 16 04:59:14.643270 (udev-worker)[4139]: Network interface NamePolicy= disabled on kernel command line. 
Sep 16 04:59:14.645057 systemd-networkd[1815]: lxc_health: Link UP Sep 16 04:59:14.652200 systemd-networkd[1815]: lxc_health: Gained carrier Sep 16 04:59:14.657218 systemd-networkd[1815]: cilium_vxlan: Gained IPv6LL Sep 16 04:59:15.297056 systemd-networkd[1815]: lxc6f787d9a3548: Link UP Sep 16 04:59:15.302266 kernel: eth0: renamed from tmpa032a Sep 16 04:59:15.305278 (udev-worker)[4141]: Network interface NamePolicy= disabled on kernel command line. Sep 16 04:59:15.314144 systemd-networkd[1815]: lxc6f787d9a3548: Gained carrier Sep 16 04:59:15.314790 systemd-networkd[1815]: lxcd26de4761982: Link UP Sep 16 04:59:15.316122 kernel: eth0: renamed from tmp49b03 Sep 16 04:59:15.320813 systemd-networkd[1815]: lxcd26de4761982: Gained carrier Sep 16 04:59:16.000318 systemd-networkd[1815]: lxc_health: Gained IPv6LL Sep 16 04:59:17.153172 systemd-networkd[1815]: lxcd26de4761982: Gained IPv6LL Sep 16 04:59:17.344332 systemd-networkd[1815]: lxc6f787d9a3548: Gained IPv6LL Sep 16 04:59:19.475051 ntpd[2144]: Listen normally on 6 cilium_host 192.168.0.192:123 Sep 16 04:59:19.475171 ntpd[2144]: 
Listen normally on 7 cilium_net [fe80::5499:acff:fec6:c6eb%4]:123 Sep 16 04:59:19.475204 ntpd[2144]: Listen normally on 8 cilium_host [fe80::3c6e:6cff:feaf:907a%5]:123 Sep 16 04:59:19.475230 ntpd[2144]: Listen normally on 9 cilium_vxlan [fe80::bc64:55ff:fed1:2549%6]:123 Sep 16 04:59:19.475256 ntpd[2144]: Listen normally on 10 lxc_health [fe80::28a8:1eff:fe50:a1da%8]:123 Sep 16 04:59:19.475281 ntpd[2144]: Listen normally on 11 lxc6f787d9a3548 [fe80::c011:2fff:fefc:f15b%10]:123 Sep 16 04:59:19.475307 ntpd[2144]: Listen normally on 12 lxcd26de4761982 [fe80::9012:4eff:fe9f:5f7%12]:123 Sep 16 04:59:19.752181 containerd[1909]: time="2025-09-16T04:59:19.751963833Z" level=info msg="connecting to shim 49b03be3b4dfc5c4a759fb9939343865f2b55b53461f02ff8c5d4519690415d4" address="unix:///run/containerd/s/c904992179b10c2774db08db5055105ebd14d8290d1a6d487a44b399ee0fc6df" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:19.780563 containerd[1909]: time="2025-09-16T04:59:19.780439346Z" level=info msg="connecting to shim a032a8aa2ab403576e4a57ae9f1a5fb306d89cd15ad21c8c5c231d67956e51b3" address="unix:///run/containerd/s/839e73cfcb7ee0ab51b82f9ff5cecc231cbdf97717a088d74b32fae645b698e9" namespace=k8s.io protocol=ttrpc version=3 Sep 16 04:59:19.801814 systemd[1]: Started cri-containerd-49b03be3b4dfc5c4a759fb9939343865f2b55b53461f02ff8c5d4519690415d4.scope - libcontainer container 49b03be3b4dfc5c4a759fb9939343865f2b55b53461f02ff8c5d4519690415d4. Sep 16 04:59:19.864373 systemd[1]: Started cri-containerd-a032a8aa2ab403576e4a57ae9f1a5fb306d89cd15ad21c8c5c231d67956e51b3.scope - libcontainer container a032a8aa2ab403576e4a57ae9f1a5fb306d89cd15ad21c8c5c231d67956e51b3. 
Sep 16 04:59:19.947714 containerd[1909]: time="2025-09-16T04:59:19.947672104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j2x66,Uid:f3a955d0-a8fa-4172-9864-859670acf183,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b03be3b4dfc5c4a759fb9939343865f2b55b53461f02ff8c5d4519690415d4\"" Sep 16 04:59:19.956520 containerd[1909]: time="2025-09-16T04:59:19.956458729Z" level=info msg="CreateContainer within sandbox \"49b03be3b4dfc5c4a759fb9939343865f2b55b53461f02ff8c5d4519690415d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:59:19.988493 containerd[1909]: time="2025-09-16T04:59:19.988421720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r5l6p,Uid:0b324f85-839d-452f-ad64-68abb802d1c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a032a8aa2ab403576e4a57ae9f1a5fb306d89cd15ad21c8c5c231d67956e51b3\"" Sep 16 04:59:19.993331 containerd[1909]: time="2025-09-16T04:59:19.993296133Z" level=info msg="CreateContainer within sandbox \"a032a8aa2ab403576e4a57ae9f1a5fb306d89cd15ad21c8c5c231d67956e51b3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 16 04:59:20.031696 containerd[1909]: time="2025-09-16T04:59:20.031028220Z" level=info msg="Container 2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:20.044239 containerd[1909]: time="2025-09-16T04:59:20.044185093Z" level=info msg="Container a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f: CDI devices from CRI Config.CDIDevices: []" Sep 16 04:59:20.052799 containerd[1909]: time="2025-09-16T04:59:20.052724740Z" level=info msg="CreateContainer within sandbox \"a032a8aa2ab403576e4a57ae9f1a5fb306d89cd15ad21c8c5c231d67956e51b3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a\"" Sep 16 04:59:20.055159 containerd[1909]: 
time="2025-09-16T04:59:20.055105681Z" level=info msg="StartContainer for \"2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a\"" Sep 16 04:59:20.058275 containerd[1909]: time="2025-09-16T04:59:20.058218663Z" level=info msg="connecting to shim 2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a" address="unix:///run/containerd/s/839e73cfcb7ee0ab51b82f9ff5cecc231cbdf97717a088d74b32fae645b698e9" protocol=ttrpc version=3 Sep 16 04:59:20.079771 containerd[1909]: time="2025-09-16T04:59:20.079726813Z" level=info msg="CreateContainer within sandbox \"49b03be3b4dfc5c4a759fb9939343865f2b55b53461f02ff8c5d4519690415d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f\"" Sep 16 04:59:20.083695 containerd[1909]: time="2025-09-16T04:59:20.083564821Z" level=info msg="StartContainer for \"a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f\"" Sep 16 04:59:20.086547 containerd[1909]: time="2025-09-16T04:59:20.086513225Z" level=info msg="connecting to shim a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f" address="unix:///run/containerd/s/c904992179b10c2774db08db5055105ebd14d8290d1a6d487a44b399ee0fc6df" protocol=ttrpc version=3 Sep 16 04:59:20.087606 systemd[1]: Started cri-containerd-2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a.scope - libcontainer container 2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a. Sep 16 04:59:20.119942 systemd[1]: Started cri-containerd-a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f.scope - libcontainer container a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f. 
Sep 16 04:59:20.187131 containerd[1909]: time="2025-09-16T04:59:20.186679342Z" level=info msg="StartContainer for \"2b21d5f09d5a311dd2c1c2df0b8ca4257e31ce298cdf4990abcbfd5fc3741c3a\" returns successfully" Sep 16 04:59:20.202713 containerd[1909]: time="2025-09-16T04:59:20.202655354Z" level=info msg="StartContainer for \"a82537d4a54ba5ee7bf1aa507fa8b7efbb5694c2382c09182939a84018909e1f\" returns successfully" Sep 16 04:59:20.531024 kubelet[3267]: I0916 04:59:20.530911 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r5l6p" podStartSLOduration=37.530891368 podStartE2EDuration="37.530891368s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:20.527726358 +0000 UTC m=+42.359872859" watchObservedRunningTime="2025-09-16 04:59:20.530891368 +0000 UTC m=+42.363037881" Sep 16 04:59:20.555025 kubelet[3267]: I0916 04:59:20.554921 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j2x66" podStartSLOduration=37.554904326 podStartE2EDuration="37.554904326s" podCreationTimestamp="2025-09-16 04:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 04:59:20.553469116 +0000 UTC m=+42.385615607" watchObservedRunningTime="2025-09-16 04:59:20.554904326 +0000 UTC m=+42.387050823" Sep 16 04:59:20.723262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278470548.mount: Deactivated successfully. Sep 16 04:59:24.500819 systemd[1]: Started sshd@9-172.31.28.73:22-139.178.68.195:36660.service - OpenSSH per-connection server daemon (139.178.68.195:36660). 
Sep 16 04:59:24.701402 sshd[4668]: Accepted publickey for core from 139.178.68.195 port 36660 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:24.703453 sshd-session[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:24.709800 systemd-logind[1858]: New session 10 of user core. Sep 16 04:59:24.715335 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 16 04:59:25.486122 sshd[4671]: Connection closed by 139.178.68.195 port 36660 Sep 16 04:59:25.486868 sshd-session[4668]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:25.491964 systemd[1]: sshd@9-172.31.28.73:22-139.178.68.195:36660.service: Deactivated successfully. Sep 16 04:59:25.495546 systemd[1]: session-10.scope: Deactivated successfully. Sep 16 04:59:25.496900 systemd-logind[1858]: Session 10 logged out. Waiting for processes to exit. Sep 16 04:59:25.499544 systemd-logind[1858]: Removed session 10. Sep 16 04:59:30.518449 systemd[1]: Started sshd@10-172.31.28.73:22-139.178.68.195:48632.service - OpenSSH per-connection server daemon (139.178.68.195:48632). Sep 16 04:59:30.692357 sshd[4685]: Accepted publickey for core from 139.178.68.195 port 48632 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:30.693701 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:30.699366 systemd-logind[1858]: New session 11 of user core. Sep 16 04:59:30.704301 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 16 04:59:30.902612 sshd[4688]: Connection closed by 139.178.68.195 port 48632 Sep 16 04:59:30.903515 sshd-session[4685]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:30.911662 systemd[1]: sshd@10-172.31.28.73:22-139.178.68.195:48632.service: Deactivated successfully. Sep 16 04:59:30.914767 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 16 04:59:30.919058 systemd-logind[1858]: Session 11 logged out. Waiting for processes to exit. Sep 16 04:59:30.920184 systemd-logind[1858]: Removed session 11. Sep 16 04:59:35.937983 systemd[1]: Started sshd@11-172.31.28.73:22-139.178.68.195:48640.service - OpenSSH per-connection server daemon (139.178.68.195:48640). Sep 16 04:59:36.134503 sshd[4701]: Accepted publickey for core from 139.178.68.195 port 48640 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:36.135954 sshd-session[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:36.142806 systemd-logind[1858]: New session 12 of user core. Sep 16 04:59:36.150375 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 16 04:59:36.356500 sshd[4704]: Connection closed by 139.178.68.195 port 48640 Sep 16 04:59:36.357243 sshd-session[4701]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:36.361049 systemd[1]: sshd@11-172.31.28.73:22-139.178.68.195:48640.service: Deactivated successfully. Sep 16 04:59:36.363309 systemd[1]: session-12.scope: Deactivated successfully. Sep 16 04:59:36.364957 systemd-logind[1858]: Session 12 logged out. Waiting for processes to exit. Sep 16 04:59:36.368669 systemd-logind[1858]: Removed session 12. Sep 16 04:59:41.390394 systemd[1]: Started sshd@12-172.31.28.73:22-139.178.68.195:40196.service - OpenSSH per-connection server daemon (139.178.68.195:40196). Sep 16 04:59:41.572189 sshd[4718]: Accepted publickey for core from 139.178.68.195 port 40196 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:41.573723 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:41.581474 systemd-logind[1858]: New session 13 of user core. Sep 16 04:59:41.590351 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 16 04:59:41.796889 sshd[4721]: Connection closed by 139.178.68.195 port 40196 Sep 16 04:59:41.798216 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:41.805037 systemd[1]: sshd@12-172.31.28.73:22-139.178.68.195:40196.service: Deactivated successfully. Sep 16 04:59:41.807763 systemd[1]: session-13.scope: Deactivated successfully. Sep 16 04:59:41.808879 systemd-logind[1858]: Session 13 logged out. Waiting for processes to exit. Sep 16 04:59:41.811197 systemd-logind[1858]: Removed session 13. Sep 16 04:59:41.830356 systemd[1]: Started sshd@13-172.31.28.73:22-139.178.68.195:40208.service - OpenSSH per-connection server daemon (139.178.68.195:40208). Sep 16 04:59:42.018914 sshd[4734]: Accepted publickey for core from 139.178.68.195 port 40208 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:42.020855 sshd-session[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:42.027208 systemd-logind[1858]: New session 14 of user core. Sep 16 04:59:42.036133 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 16 04:59:42.407717 sshd[4737]: Connection closed by 139.178.68.195 port 40208 Sep 16 04:59:42.408468 sshd-session[4734]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:42.419874 systemd-logind[1858]: Session 14 logged out. Waiting for processes to exit. Sep 16 04:59:42.421823 systemd[1]: sshd@13-172.31.28.73:22-139.178.68.195:40208.service: Deactivated successfully. Sep 16 04:59:42.423966 systemd[1]: session-14.scope: Deactivated successfully. Sep 16 04:59:42.448140 systemd-logind[1858]: Removed session 14. Sep 16 04:59:42.451580 systemd[1]: Started sshd@14-172.31.28.73:22-139.178.68.195:40210.service - OpenSSH per-connection server daemon (139.178.68.195:40210). 
Sep 16 04:59:42.638568 sshd[4747]: Accepted publickey for core from 139.178.68.195 port 40210 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:42.640039 sshd-session[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:42.645425 systemd-logind[1858]: New session 15 of user core. Sep 16 04:59:42.653404 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 16 04:59:42.858524 sshd[4750]: Connection closed by 139.178.68.195 port 40210 Sep 16 04:59:42.859132 sshd-session[4747]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:42.862482 systemd[1]: sshd@14-172.31.28.73:22-139.178.68.195:40210.service: Deactivated successfully. Sep 16 04:59:42.864614 systemd[1]: session-15.scope: Deactivated successfully. Sep 16 04:59:42.867335 systemd-logind[1858]: Session 15 logged out. Waiting for processes to exit. Sep 16 04:59:42.868586 systemd-logind[1858]: Removed session 15. Sep 16 04:59:47.901298 systemd[1]: Started sshd@15-172.31.28.73:22-139.178.68.195:40222.service - OpenSSH per-connection server daemon (139.178.68.195:40222). Sep 16 04:59:48.101007 sshd[4764]: Accepted publickey for core from 139.178.68.195 port 40222 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:48.102988 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:48.110409 systemd-logind[1858]: New session 16 of user core. Sep 16 04:59:48.115324 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 16 04:59:48.308523 sshd[4767]: Connection closed by 139.178.68.195 port 40222 Sep 16 04:59:48.309077 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:48.313406 systemd[1]: sshd@15-172.31.28.73:22-139.178.68.195:40222.service: Deactivated successfully. Sep 16 04:59:48.315576 systemd[1]: session-16.scope: Deactivated successfully. 
Sep 16 04:59:48.316659 systemd-logind[1858]: Session 16 logged out. Waiting for processes to exit. Sep 16 04:59:48.318540 systemd-logind[1858]: Removed session 16. Sep 16 04:59:53.347473 systemd[1]: Started sshd@16-172.31.28.73:22-139.178.68.195:56956.service - OpenSSH per-connection server daemon (139.178.68.195:56956). Sep 16 04:59:53.523673 sshd[4780]: Accepted publickey for core from 139.178.68.195 port 56956 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:53.527662 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:53.537176 systemd-logind[1858]: New session 17 of user core. Sep 16 04:59:53.541381 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 16 04:59:53.748059 sshd[4783]: Connection closed by 139.178.68.195 port 56956 Sep 16 04:59:53.748881 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:53.753782 systemd[1]: sshd@16-172.31.28.73:22-139.178.68.195:56956.service: Deactivated successfully. Sep 16 04:59:53.756858 systemd[1]: session-17.scope: Deactivated successfully. Sep 16 04:59:53.760533 systemd-logind[1858]: Session 17 logged out. Waiting for processes to exit. Sep 16 04:59:53.762108 systemd-logind[1858]: Removed session 17. Sep 16 04:59:53.783375 systemd[1]: Started sshd@17-172.31.28.73:22-139.178.68.195:56966.service - OpenSSH per-connection server daemon (139.178.68.195:56966). Sep 16 04:59:53.980788 sshd[4795]: Accepted publickey for core from 139.178.68.195 port 56966 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:53.982597 sshd-session[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:53.989596 systemd-logind[1858]: New session 18 of user core. Sep 16 04:59:53.994306 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 16 04:59:54.613758 sshd[4798]: Connection closed by 139.178.68.195 port 56966 Sep 16 04:59:54.614837 sshd-session[4795]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:54.624167 systemd[1]: sshd@17-172.31.28.73:22-139.178.68.195:56966.service: Deactivated successfully. Sep 16 04:59:54.634374 systemd[1]: session-18.scope: Deactivated successfully. Sep 16 04:59:54.636778 systemd-logind[1858]: Session 18 logged out. Waiting for processes to exit. Sep 16 04:59:54.648303 systemd[1]: Started sshd@18-172.31.28.73:22-139.178.68.195:56974.service - OpenSSH per-connection server daemon (139.178.68.195:56974). Sep 16 04:59:54.649877 systemd-logind[1858]: Removed session 18. Sep 16 04:59:54.833189 sshd[4808]: Accepted publickey for core from 139.178.68.195 port 56974 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:54.835294 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:54.840977 systemd-logind[1858]: New session 19 of user core. Sep 16 04:59:54.845351 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 16 04:59:55.649616 sshd[4811]: Connection closed by 139.178.68.195 port 56974 Sep 16 04:59:55.650433 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:55.664164 systemd-logind[1858]: Session 19 logged out. Waiting for processes to exit. Sep 16 04:59:55.664519 systemd[1]: sshd@18-172.31.28.73:22-139.178.68.195:56974.service: Deactivated successfully. Sep 16 04:59:55.671011 systemd[1]: session-19.scope: Deactivated successfully. Sep 16 04:59:55.689382 systemd[1]: Started sshd@19-172.31.28.73:22-139.178.68.195:56988.service - OpenSSH per-connection server daemon (139.178.68.195:56988). Sep 16 04:59:55.692981 systemd-logind[1858]: Removed session 19. 
Sep 16 04:59:55.882265 sshd[4828]: Accepted publickey for core from 139.178.68.195 port 56988 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:55.883738 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:55.889915 systemd-logind[1858]: New session 20 of user core. Sep 16 04:59:55.895389 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 16 04:59:56.290470 sshd[4831]: Connection closed by 139.178.68.195 port 56988 Sep 16 04:59:56.291286 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:56.296160 systemd-logind[1858]: Session 20 logged out. Waiting for processes to exit. Sep 16 04:59:56.297065 systemd[1]: sshd@19-172.31.28.73:22-139.178.68.195:56988.service: Deactivated successfully. Sep 16 04:59:56.299115 systemd[1]: session-20.scope: Deactivated successfully. Sep 16 04:59:56.302006 systemd-logind[1858]: Removed session 20. Sep 16 04:59:56.332738 systemd[1]: Started sshd@20-172.31.28.73:22-139.178.68.195:57004.service - OpenSSH per-connection server daemon (139.178.68.195:57004). Sep 16 04:59:56.502981 sshd[4840]: Accepted publickey for core from 139.178.68.195 port 57004 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 04:59:56.504708 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 04:59:56.511564 systemd-logind[1858]: New session 21 of user core. Sep 16 04:59:56.515295 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 16 04:59:56.714314 sshd[4843]: Connection closed by 139.178.68.195 port 57004 Sep 16 04:59:56.715179 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Sep 16 04:59:56.721543 systemd[1]: sshd@20-172.31.28.73:22-139.178.68.195:57004.service: Deactivated successfully. Sep 16 04:59:56.724776 systemd[1]: session-21.scope: Deactivated successfully. 
Sep 16 04:59:56.726171 systemd-logind[1858]: Session 21 logged out. Waiting for processes to exit. Sep 16 04:59:56.729899 systemd-logind[1858]: Removed session 21. Sep 16 05:00:01.774003 systemd[1]: Started sshd@21-172.31.28.73:22-139.178.68.195:35078.service - OpenSSH per-connection server daemon (139.178.68.195:35078). Sep 16 05:00:02.135542 sshd[4855]: Accepted publickey for core from 139.178.68.195 port 35078 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:02.137696 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:02.169344 systemd-logind[1858]: New session 22 of user core. Sep 16 05:00:02.183798 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 16 05:00:02.961568 sshd[4858]: Connection closed by 139.178.68.195 port 35078 Sep 16 05:00:02.970855 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:02.987877 systemd[1]: sshd@21-172.31.28.73:22-139.178.68.195:35078.service: Deactivated successfully. Sep 16 05:00:02.997150 systemd[1]: session-22.scope: Deactivated successfully. Sep 16 05:00:03.010803 systemd-logind[1858]: Session 22 logged out. Waiting for processes to exit. Sep 16 05:00:03.019046 systemd-logind[1858]: Removed session 22. Sep 16 05:00:07.993432 systemd[1]: Started sshd@22-172.31.28.73:22-139.178.68.195:35094.service - OpenSSH per-connection server daemon (139.178.68.195:35094). Sep 16 05:00:08.159910 sshd[4873]: Accepted publickey for core from 139.178.68.195 port 35094 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:08.161577 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:08.169842 systemd-logind[1858]: New session 23 of user core. Sep 16 05:00:08.174321 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 16 05:00:08.350133 sshd[4876]: Connection closed by 139.178.68.195 port 35094
Sep 16 05:00:08.350704 sshd-session[4873]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:08.354647 systemd[1]: sshd@22-172.31.28.73:22-139.178.68.195:35094.service: Deactivated successfully.
Sep 16 05:00:08.356655 systemd[1]: session-23.scope: Deactivated successfully.
Sep 16 05:00:08.358258 systemd-logind[1858]: Session 23 logged out. Waiting for processes to exit.
Sep 16 05:00:08.360032 systemd-logind[1858]: Removed session 23.
Sep 16 05:00:13.384872 systemd[1]: Started sshd@23-172.31.28.73:22-139.178.68.195:53500.service - OpenSSH per-connection server daemon (139.178.68.195:53500).
Sep 16 05:00:13.550875 sshd[4887]: Accepted publickey for core from 139.178.68.195 port 53500 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:13.552220 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:13.559165 systemd-logind[1858]: New session 24 of user core.
Sep 16 05:00:13.569429 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 16 05:00:13.762489 sshd[4890]: Connection closed by 139.178.68.195 port 53500
Sep 16 05:00:13.763307 sshd-session[4887]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:13.767820 systemd[1]: sshd@23-172.31.28.73:22-139.178.68.195:53500.service: Deactivated successfully.
Sep 16 05:00:13.770031 systemd[1]: session-24.scope: Deactivated successfully.
Sep 16 05:00:13.771569 systemd-logind[1858]: Session 24 logged out. Waiting for processes to exit.
Sep 16 05:00:13.773591 systemd-logind[1858]: Removed session 24.
Sep 16 05:00:18.801703 systemd[1]: Started sshd@24-172.31.28.73:22-139.178.68.195:53502.service - OpenSSH per-connection server daemon (139.178.68.195:53502).
Sep 16 05:00:19.006642 sshd[4904]: Accepted publickey for core from 139.178.68.195 port 53502 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:19.008234 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:19.014995 systemd-logind[1858]: New session 25 of user core.
Sep 16 05:00:19.019313 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 16 05:00:19.270992 sshd[4907]: Connection closed by 139.178.68.195 port 53502
Sep 16 05:00:19.273163 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Sep 16 05:00:19.276730 systemd[1]: sshd@24-172.31.28.73:22-139.178.68.195:53502.service: Deactivated successfully.
Sep 16 05:00:19.278916 systemd[1]: session-25.scope: Deactivated successfully.
Sep 16 05:00:19.280727 systemd-logind[1858]: Session 25 logged out. Waiting for processes to exit.
Sep 16 05:00:19.282477 systemd-logind[1858]: Removed session 25.
Sep 16 05:00:19.310380 systemd[1]: Started sshd@25-172.31.28.73:22-139.178.68.195:53512.service - OpenSSH per-connection server daemon (139.178.68.195:53512).
Sep 16 05:00:19.500147 sshd[4919]: Accepted publickey for core from 139.178.68.195 port 53512 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18
Sep 16 05:00:19.501670 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 16 05:00:19.507651 systemd-logind[1858]: New session 26 of user core.
Sep 16 05:00:19.517383 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 16 05:00:20.960352 containerd[1909]: time="2025-09-16T05:00:20.960288854Z" level=info msg="StopContainer for \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" with timeout 30 (s)"
Sep 16 05:00:20.961736 containerd[1909]: time="2025-09-16T05:00:20.960868375Z" level=info msg="Stop container \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" with signal terminated"
Sep 16 05:00:20.984345 systemd[1]: cri-containerd-c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5.scope: Deactivated successfully.
Sep 16 05:00:20.984762 systemd[1]: cri-containerd-c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5.scope: Consumed 456ms CPU time, 26.3M memory peak, 9.3M read from disk, 4K written to disk.
Sep 16 05:00:20.991272 containerd[1909]: time="2025-09-16T05:00:20.991009980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" id:\"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" pid:3766 exited_at:{seconds:1757998820 nanos:988722477}"
Sep 16 05:00:20.991422 containerd[1909]: time="2025-09-16T05:00:20.991337521Z" level=info msg="received exit event container_id:\"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" id:\"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" pid:3766 exited_at:{seconds:1757998820 nanos:988722477}"
Sep 16 05:00:21.016189 containerd[1909]: time="2025-09-16T05:00:21.015171376Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 16 05:00:21.016421 containerd[1909]: time="2025-09-16T05:00:21.016124419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" id:\"8ed647af8695d26383c11fb089910df239cae8040a41a9533ebb74733455052e\" pid:4948 exited_at:{seconds:1757998821 nanos:15417191}"
Sep 16 05:00:21.019695 containerd[1909]: time="2025-09-16T05:00:21.019624714Z" level=info msg="StopContainer for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" with timeout 2 (s)"
Sep 16 05:00:21.020380 containerd[1909]: time="2025-09-16T05:00:21.020333443Z" level=info msg="Stop container \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" with signal terminated"
Sep 16 05:00:21.038656 systemd-networkd[1815]: lxc_health: Link DOWN
Sep 16 05:00:21.038670 systemd-networkd[1815]: lxc_health: Lost carrier
Sep 16 05:00:21.065638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5-rootfs.mount: Deactivated successfully.
Sep 16 05:00:21.074893 systemd[1]: cri-containerd-6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05.scope: Deactivated successfully.
Sep 16 05:00:21.075936 systemd[1]: cri-containerd-6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05.scope: Consumed 7.974s CPU time, 190.6M memory peak, 67.3M read from disk, 13.3M written to disk.
Sep 16 05:00:21.078591 containerd[1909]: time="2025-09-16T05:00:21.078518447Z" level=info msg="received exit event container_id:\"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" id:\"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" pid:4001 exited_at:{seconds:1757998821 nanos:78243826}"
Sep 16 05:00:21.078937 containerd[1909]: time="2025-09-16T05:00:21.078906458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" id:\"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" pid:4001 exited_at:{seconds:1757998821 nanos:78243826}"
Sep 16 05:00:21.110731 containerd[1909]: time="2025-09-16T05:00:21.110539862Z" level=info msg="StopContainer for \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" returns successfully"
Sep 16 05:00:21.111768 containerd[1909]: time="2025-09-16T05:00:21.111730757Z" level=info msg="StopPodSandbox for \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\""
Sep 16 05:00:21.119796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05-rootfs.mount: Deactivated successfully.
Sep 16 05:00:21.125369 containerd[1909]: time="2025-09-16T05:00:21.125025989Z" level=info msg="Container to stop \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:00:21.141987 systemd[1]: cri-containerd-90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f.scope: Deactivated successfully.
Sep 16 05:00:21.144826 containerd[1909]: time="2025-09-16T05:00:21.143889764Z" level=info msg="StopContainer for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" returns successfully"
Sep 16 05:00:21.144826 containerd[1909]: time="2025-09-16T05:00:21.144605954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" id:\"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" pid:3470 exit_status:137 exited_at:{seconds:1757998821 nanos:143386680}"
Sep 16 05:00:21.146732 containerd[1909]: time="2025-09-16T05:00:21.146701912Z" level=info msg="StopPodSandbox for \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\""
Sep 16 05:00:21.146833 containerd[1909]: time="2025-09-16T05:00:21.146804813Z" level=info msg="Container to stop \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:00:21.146833 containerd[1909]: time="2025-09-16T05:00:21.146823265Z" level=info msg="Container to stop \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:00:21.146918 containerd[1909]: time="2025-09-16T05:00:21.146837289Z" level=info msg="Container to stop \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:00:21.146918 containerd[1909]: time="2025-09-16T05:00:21.146852420Z" level=info msg="Container to stop \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:00:21.146918 containerd[1909]: time="2025-09-16T05:00:21.146865185Z" level=info msg="Container to stop \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 16 05:00:21.162448 systemd[1]: cri-containerd-d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e.scope: Deactivated successfully.
Sep 16 05:00:21.204627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f-rootfs.mount: Deactivated successfully.
Sep 16 05:00:21.214129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e-rootfs.mount: Deactivated successfully.
Sep 16 05:00:21.220504 containerd[1909]: time="2025-09-16T05:00:21.220458678Z" level=info msg="shim disconnected" id=d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e namespace=k8s.io
Sep 16 05:00:21.220504 containerd[1909]: time="2025-09-16T05:00:21.220505722Z" level=warning msg="cleaning up after shim disconnected" id=d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e namespace=k8s.io
Sep 16 05:00:21.235628 containerd[1909]: time="2025-09-16T05:00:21.220515746Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 16 05:00:21.235860 containerd[1909]: time="2025-09-16T05:00:21.224230461Z" level=info msg="shim disconnected" id=90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f namespace=k8s.io
Sep 16 05:00:21.235925 containerd[1909]: time="2025-09-16T05:00:21.235865187Z" level=warning msg="cleaning up after shim disconnected" id=90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f namespace=k8s.io
Sep 16 05:00:21.235925 containerd[1909]: time="2025-09-16T05:00:21.235877338Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 16 05:00:21.271111 containerd[1909]: time="2025-09-16T05:00:21.269637876Z" level=info msg="received exit event sandbox_id:\"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" exit_status:137 exited_at:{seconds:1757998821 nanos:170326200}"
Sep 16 05:00:21.275548 containerd[1909]: time="2025-09-16T05:00:21.275500611Z" level=info msg="TearDown network for sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" successfully"
Sep 16 05:00:21.275726 containerd[1909]: time="2025-09-16T05:00:21.275708971Z" level=info msg="StopPodSandbox for \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" returns successfully"
Sep 16 05:00:21.280126 containerd[1909]: time="2025-09-16T05:00:21.276886951Z" level=info msg="TearDown network for sandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" successfully"
Sep 16 05:00:21.280013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e-shm.mount: Deactivated successfully.
Sep 16 05:00:21.280564 containerd[1909]: time="2025-09-16T05:00:21.280510321Z" level=info msg="StopPodSandbox for \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" returns successfully"
Sep 16 05:00:21.280654 containerd[1909]: time="2025-09-16T05:00:21.280210852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" id:\"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" pid:3517 exit_status:137 exited_at:{seconds:1757998821 nanos:170326200}"
Sep 16 05:00:21.280774 containerd[1909]: time="2025-09-16T05:00:21.280354715Z" level=info msg="received exit event sandbox_id:\"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" exit_status:137 exited_at:{seconds:1757998821 nanos:143386680}"
Sep 16 05:00:21.420474 kubelet[3267]: I0916 05:00:21.420390 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.423748 kubelet[3267]: I0916 05:00:21.423694 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-kernel\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424531 kubelet[3267]: I0916 05:00:21.423784 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-xtables-lock\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424531 kubelet[3267]: I0916 05:00:21.423805 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-lib-modules\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424531 kubelet[3267]: I0916 05:00:21.423828 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-net\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424531 kubelet[3267]: I0916 05:00:21.423849 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqmfm\" (UniqueName: \"kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-kube-api-access-hqmfm\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424531 kubelet[3267]: I0916 05:00:21.423865 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrfmf\" (UniqueName: \"kubernetes.io/projected/143e3301-5137-46d0-bd13-58352d95ea88-kube-api-access-jrfmf\") pod \"143e3301-5137-46d0-bd13-58352d95ea88\" (UID: \"143e3301-5137-46d0-bd13-58352d95ea88\") "
Sep 16 05:00:21.424531 kubelet[3267]: I0916 05:00:21.423882 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/143e3301-5137-46d0-bd13-58352d95ea88-cilium-config-path\") pod \"143e3301-5137-46d0-bd13-58352d95ea88\" (UID: \"143e3301-5137-46d0-bd13-58352d95ea88\") "
Sep 16 05:00:21.424739 kubelet[3267]: I0916 05:00:21.423898 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-config-path\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424739 kubelet[3267]: I0916 05:00:21.423917 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-etc-cni-netd\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424739 kubelet[3267]: I0916 05:00:21.423932 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hubble-tls\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424739 kubelet[3267]: I0916 05:00:21.423921 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.424739 kubelet[3267]: I0916 05:00:21.423948 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-run\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424739 kubelet[3267]: I0916 05:00:21.423965 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebdc0486-555a-4904-86b5-f5d7b6c3927d-clustermesh-secrets\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424886 kubelet[3267]: I0916 05:00:21.423967 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.424886 kubelet[3267]: I0916 05:00:21.423983 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.424886 kubelet[3267]: I0916 05:00:21.423982 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-bpf-maps\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424886 kubelet[3267]: I0916 05:00:21.424026 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hostproc\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424886 kubelet[3267]: I0916 05:00:21.424042 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-cgroup\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.424886 kubelet[3267]: I0916 05:00:21.424066 3267 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cni-path\") pod \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\" (UID: \"ebdc0486-555a-4904-86b5-f5d7b6c3927d\") "
Sep 16 05:00:21.425172 kubelet[3267]: I0916 05:00:21.424131 3267 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-xtables-lock\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.425172 kubelet[3267]: I0916 05:00:21.424150 3267 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-kernel\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.425172 kubelet[3267]: I0916 05:00:21.424165 3267 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-lib-modules\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.425172 kubelet[3267]: I0916 05:00:21.424174 3267 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-host-proc-sys-net\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.425172 kubelet[3267]: I0916 05:00:21.424192 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cni-path" (OuterVolumeSpecName: "cni-path") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.425172 kubelet[3267]: I0916 05:00:21.424206 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hostproc" (OuterVolumeSpecName: "hostproc") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.425901 kubelet[3267]: I0916 05:00:21.424218 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.425901 kubelet[3267]: I0916 05:00:21.424232 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.434164 kubelet[3267]: I0916 05:00:21.434109 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.435125 kubelet[3267]: I0916 05:00:21.435053 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143e3301-5137-46d0-bd13-58352d95ea88-kube-api-access-jrfmf" (OuterVolumeSpecName: "kube-api-access-jrfmf") pod "143e3301-5137-46d0-bd13-58352d95ea88" (UID: "143e3301-5137-46d0-bd13-58352d95ea88"). InnerVolumeSpecName "kube-api-access-jrfmf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 05:00:21.435621 kubelet[3267]: I0916 05:00:21.435486 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 16 05:00:21.435819 kubelet[3267]: I0916 05:00:21.435761 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-kube-api-access-hqmfm" (OuterVolumeSpecName: "kube-api-access-hqmfm") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "kube-api-access-hqmfm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 05:00:21.438617 kubelet[3267]: I0916 05:00:21.438507 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/143e3301-5137-46d0-bd13-58352d95ea88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "143e3301-5137-46d0-bd13-58352d95ea88" (UID: "143e3301-5137-46d0-bd13-58352d95ea88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 16 05:00:21.438617 kubelet[3267]: I0916 05:00:21.438461 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 16 05:00:21.439177 kubelet[3267]: I0916 05:00:21.439119 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebdc0486-555a-4904-86b5-f5d7b6c3927d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 16 05:00:21.440412 kubelet[3267]: I0916 05:00:21.440377 3267 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ebdc0486-555a-4904-86b5-f5d7b6c3927d" (UID: "ebdc0486-555a-4904-86b5-f5d7b6c3927d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.524886 3267 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebdc0486-555a-4904-86b5-f5d7b6c3927d-clustermesh-secrets\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525138 3267 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hubble-tls\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525170 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-run\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525216 3267 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-bpf-maps\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525230 3267 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-hostproc\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525242 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-cgroup\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525254 3267 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cni-path\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526258 kubelet[3267]: I0916 05:00:21.525296 3267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrfmf\" (UniqueName: \"kubernetes.io/projected/143e3301-5137-46d0-bd13-58352d95ea88-kube-api-access-jrfmf\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526696 kubelet[3267]: I0916 05:00:21.525309 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/143e3301-5137-46d0-bd13-58352d95ea88-cilium-config-path\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526696 kubelet[3267]: I0916 05:00:21.525323 3267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqmfm\" (UniqueName: \"kubernetes.io/projected/ebdc0486-555a-4904-86b5-f5d7b6c3927d-kube-api-access-hqmfm\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526696 kubelet[3267]: I0916 05:00:21.525336 3267 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebdc0486-555a-4904-86b5-f5d7b6c3927d-etc-cni-netd\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.526696 kubelet[3267]: I0916 05:00:21.525377 3267 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebdc0486-555a-4904-86b5-f5d7b6c3927d-cilium-config-path\") on node \"ip-172-31-28-73\" DevicePath \"\""
Sep 16 05:00:21.723130 kubelet[3267]: I0916 05:00:21.722486 3267 scope.go:117] "RemoveContainer" containerID="6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05"
Sep 16 05:00:21.735928 containerd[1909]: time="2025-09-16T05:00:21.735477274Z" level=info msg="RemoveContainer for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\""
Sep 16 05:00:21.739680 systemd[1]: Removed slice kubepods-burstable-podebdc0486_555a_4904_86b5_f5d7b6c3927d.slice - libcontainer container kubepods-burstable-podebdc0486_555a_4904_86b5_f5d7b6c3927d.slice.
Sep 16 05:00:21.739906 systemd[1]: kubepods-burstable-podebdc0486_555a_4904_86b5_f5d7b6c3927d.slice: Consumed 8.081s CPU time, 190.9M memory peak, 68.4M read from disk, 13.3M written to disk.
Sep 16 05:00:21.755267 systemd[1]: Removed slice kubepods-besteffort-pod143e3301_5137_46d0_bd13_58352d95ea88.slice - libcontainer container kubepods-besteffort-pod143e3301_5137_46d0_bd13_58352d95ea88.slice.
Sep 16 05:00:21.755501 systemd[1]: kubepods-besteffort-pod143e3301_5137_46d0_bd13_58352d95ea88.slice: Consumed 490ms CPU time, 26.6M memory peak, 9.3M read from disk, 4K written to disk.
Sep 16 05:00:21.762163 containerd[1909]: time="2025-09-16T05:00:21.762099546Z" level=info msg="RemoveContainer for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" returns successfully" Sep 16 05:00:21.762473 kubelet[3267]: I0916 05:00:21.762430 3267 scope.go:117] "RemoveContainer" containerID="00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f" Sep 16 05:00:21.764848 containerd[1909]: time="2025-09-16T05:00:21.764808617Z" level=info msg="RemoveContainer for \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\"" Sep 16 05:00:21.773868 containerd[1909]: time="2025-09-16T05:00:21.773813745Z" level=info msg="RemoveContainer for \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" returns successfully" Sep 16 05:00:21.774162 kubelet[3267]: I0916 05:00:21.774138 3267 scope.go:117] "RemoveContainer" containerID="944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c" Sep 16 05:00:21.777897 containerd[1909]: time="2025-09-16T05:00:21.777288044Z" level=info msg="RemoveContainer for \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\"" Sep 16 05:00:21.786359 containerd[1909]: time="2025-09-16T05:00:21.786285118Z" level=info msg="RemoveContainer for \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" returns successfully" Sep 16 05:00:21.786578 kubelet[3267]: I0916 05:00:21.786549 3267 scope.go:117] "RemoveContainer" containerID="2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39" Sep 16 05:00:21.788870 containerd[1909]: time="2025-09-16T05:00:21.788832622Z" level=info msg="RemoveContainer for \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\"" Sep 16 05:00:21.794981 containerd[1909]: time="2025-09-16T05:00:21.794914666Z" level=info msg="RemoveContainer for \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" returns successfully" Sep 16 05:00:21.795258 kubelet[3267]: I0916 05:00:21.795230 3267 scope.go:117] 
"RemoveContainer" containerID="205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23" Sep 16 05:00:21.797448 containerd[1909]: time="2025-09-16T05:00:21.797410521Z" level=info msg="RemoveContainer for \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\"" Sep 16 05:00:21.806337 containerd[1909]: time="2025-09-16T05:00:21.806214945Z" level=info msg="RemoveContainer for \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" returns successfully" Sep 16 05:00:21.806632 kubelet[3267]: I0916 05:00:21.806602 3267 scope.go:117] "RemoveContainer" containerID="6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05" Sep 16 05:00:21.807283 containerd[1909]: time="2025-09-16T05:00:21.807214469Z" level=error msg="ContainerStatus for \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\": not found" Sep 16 05:00:21.807627 kubelet[3267]: E0916 05:00:21.807594 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\": not found" containerID="6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05" Sep 16 05:00:21.807792 kubelet[3267]: I0916 05:00:21.807638 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05"} err="failed to get container status \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e7862a31792a9a586353a8874a9f6210104d4a9b7a4767e6067a06c930f4d05\": not found" Sep 16 05:00:21.807792 kubelet[3267]: I0916 05:00:21.807780 3267 scope.go:117] "RemoveContainer" 
containerID="00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f" Sep 16 05:00:21.808098 containerd[1909]: time="2025-09-16T05:00:21.808050660Z" level=error msg="ContainerStatus for \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\": not found" Sep 16 05:00:21.819065 kubelet[3267]: E0916 05:00:21.819034 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\": not found" containerID="00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f" Sep 16 05:00:21.819386 kubelet[3267]: I0916 05:00:21.819257 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f"} err="failed to get container status \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\": rpc error: code = NotFound desc = an error occurred when try to find container \"00031fd2e02469bc301d35f7330ac660c3136ee6229d50ee23b834ebf846e42f\": not found" Sep 16 05:00:21.819386 kubelet[3267]: I0916 05:00:21.819286 3267 scope.go:117] "RemoveContainer" containerID="944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c" Sep 16 05:00:21.820182 containerd[1909]: time="2025-09-16T05:00:21.820116831Z" level=error msg="ContainerStatus for \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\": not found" Sep 16 05:00:21.820287 kubelet[3267]: E0916 05:00:21.820256 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\": not found" containerID="944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c" Sep 16 05:00:21.820332 kubelet[3267]: I0916 05:00:21.820280 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c"} err="failed to get container status \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\": rpc error: code = NotFound desc = an error occurred when try to find container \"944ed0739062aefe6c3c0f9fa98ed90377d93fdcfd5e8091517e254a32e5a42c\": not found" Sep 16 05:00:21.820332 kubelet[3267]: I0916 05:00:21.820300 3267 scope.go:117] "RemoveContainer" containerID="2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39" Sep 16 05:00:21.820502 containerd[1909]: time="2025-09-16T05:00:21.820469459Z" level=error msg="ContainerStatus for \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\": not found" Sep 16 05:00:21.820579 kubelet[3267]: E0916 05:00:21.820562 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\": not found" containerID="2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39" Sep 16 05:00:21.820614 kubelet[3267]: I0916 05:00:21.820601 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39"} err="failed to get container status \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"2ef787936fa278cca992162528cac9cc6597424ff51f71adfa24b0be0c2aaf39\": not found" Sep 16 05:00:21.820650 kubelet[3267]: I0916 05:00:21.820615 3267 scope.go:117] "RemoveContainer" containerID="205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23" Sep 16 05:00:21.820796 containerd[1909]: time="2025-09-16T05:00:21.820742696Z" level=error msg="ContainerStatus for \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\": not found" Sep 16 05:00:21.820900 kubelet[3267]: E0916 05:00:21.820849 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\": not found" containerID="205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23" Sep 16 05:00:21.820900 kubelet[3267]: I0916 05:00:21.820887 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23"} err="failed to get container status \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\": rpc error: code = NotFound desc = an error occurred when try to find container \"205aebbf6cff03d7930b91505d65cfb20dbca72ae8444f7f808e5a6885e90a23\": not found" Sep 16 05:00:21.820900 kubelet[3267]: I0916 05:00:21.820899 3267 scope.go:117] "RemoveContainer" containerID="c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5" Sep 16 05:00:21.822398 containerd[1909]: time="2025-09-16T05:00:21.822351646Z" level=info msg="RemoveContainer for \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\"" Sep 16 05:00:21.828007 containerd[1909]: time="2025-09-16T05:00:21.827961400Z" level=info msg="RemoveContainer for 
\"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" returns successfully" Sep 16 05:00:21.828232 kubelet[3267]: I0916 05:00:21.828207 3267 scope.go:117] "RemoveContainer" containerID="c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5" Sep 16 05:00:21.828483 containerd[1909]: time="2025-09-16T05:00:21.828418458Z" level=error msg="ContainerStatus for \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\": not found" Sep 16 05:00:21.828569 kubelet[3267]: E0916 05:00:21.828539 3267 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\": not found" containerID="c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5" Sep 16 05:00:21.828609 kubelet[3267]: I0916 05:00:21.828565 3267 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5"} err="failed to get container status \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c04ed8c894650edf70d75cc6470293afd14b7b4c9cf6c0bd42c3f663117187d5\": not found" Sep 16 05:00:22.063518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f-shm.mount: Deactivated successfully. Sep 16 05:00:22.063629 systemd[1]: var-lib-kubelet-pods-ebdc0486\x2d555a\x2d4904\x2d86b5\x2df5d7b6c3927d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhqmfm.mount: Deactivated successfully. 
Sep 16 05:00:22.063692 systemd[1]: var-lib-kubelet-pods-143e3301\x2d5137\x2d46d0\x2dbd13\x2d58352d95ea88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrfmf.mount: Deactivated successfully. Sep 16 05:00:22.063753 systemd[1]: var-lib-kubelet-pods-ebdc0486\x2d555a\x2d4904\x2d86b5\x2df5d7b6c3927d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 16 05:00:22.063817 systemd[1]: var-lib-kubelet-pods-ebdc0486\x2d555a\x2d4904\x2d86b5\x2df5d7b6c3927d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 16 05:00:22.301482 kubelet[3267]: I0916 05:00:22.301422 3267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143e3301-5137-46d0-bd13-58352d95ea88" path="/var/lib/kubelet/pods/143e3301-5137-46d0-bd13-58352d95ea88/volumes" Sep 16 05:00:22.301860 kubelet[3267]: I0916 05:00:22.301813 3267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebdc0486-555a-4904-86b5-f5d7b6c3927d" path="/var/lib/kubelet/pods/ebdc0486-555a-4904-86b5-f5d7b6c3927d/volumes" Sep 16 05:00:22.488457 containerd[1909]: time="2025-09-16T05:00:22.488333213Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1757998821 nanos:143386680}" Sep 16 05:00:22.911575 sshd[4922]: Connection closed by 139.178.68.195 port 53512 Sep 16 05:00:22.911961 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:22.919417 systemd[1]: sshd@25-172.31.28.73:22-139.178.68.195:53512.service: Deactivated successfully. Sep 16 05:00:22.921696 systemd[1]: session-26.scope: Deactivated successfully. Sep 16 05:00:22.923589 systemd-logind[1858]: Session 26 logged out. Waiting for processes to exit. Sep 16 05:00:22.926960 systemd-logind[1858]: Removed session 26. Sep 16 05:00:22.948219 systemd[1]: Started sshd@26-172.31.28.73:22-139.178.68.195:35366.service - OpenSSH per-connection server daemon (139.178.68.195:35366). 
Sep 16 05:00:23.145507 sshd[5076]: Accepted publickey for core from 139.178.68.195 port 35366 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:23.146915 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:23.152494 systemd-logind[1858]: New session 27 of user core. Sep 16 05:00:23.155334 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 16 05:00:23.412705 kubelet[3267]: E0916 05:00:23.412647 3267 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 16 05:00:23.475109 ntpd[2144]: Deleting 10 lxc_health, [fe80::28a8:1eff:fe50:a1da%8]:123, stats: received=0, sent=0, dropped=0, active_time=64 secs Sep 16 05:00:23.475584 ntpd[2144]: 16 Sep 05:00:23 ntpd[2144]: Deleting 10 lxc_health, [fe80::28a8:1eff:fe50:a1da%8]:123, stats: received=0, sent=0, dropped=0, active_time=64 secs Sep 16 05:00:23.848107 sshd[5079]: Connection closed by 139.178.68.195 port 35366 Sep 16 05:00:23.848629 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:23.850072 kubelet[3267]: I0916 05:00:23.849598 3267 memory_manager.go:355] "RemoveStaleState removing state" podUID="143e3301-5137-46d0-bd13-58352d95ea88" containerName="cilium-operator" Sep 16 05:00:23.850438 kubelet[3267]: I0916 05:00:23.850207 3267 memory_manager.go:355] "RemoveStaleState removing state" podUID="ebdc0486-555a-4904-86b5-f5d7b6c3927d" containerName="cilium-agent" Sep 16 05:00:23.856526 systemd[1]: sshd@26-172.31.28.73:22-139.178.68.195:35366.service: Deactivated successfully. Sep 16 05:00:23.862435 systemd[1]: session-27.scope: Deactivated successfully. Sep 16 05:00:23.870609 systemd-logind[1858]: Session 27 logged out. Waiting for processes to exit. 
Sep 16 05:00:23.892428 systemd[1]: Started sshd@27-172.31.28.73:22-139.178.68.195:35380.service - OpenSSH per-connection server daemon (139.178.68.195:35380). Sep 16 05:00:23.897272 systemd-logind[1858]: Removed session 27. Sep 16 05:00:23.912446 systemd[1]: Created slice kubepods-burstable-podcabfed39_28d9_47d3_84b0_4ec962cf46b8.slice - libcontainer container kubepods-burstable-podcabfed39_28d9_47d3_84b0_4ec962cf46b8.slice. Sep 16 05:00:23.941108 kubelet[3267]: I0916 05:00:23.940848 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cabfed39-28d9-47d3-84b0-4ec962cf46b8-clustermesh-secrets\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.942722 kubelet[3267]: I0916 05:00:23.941734 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrbv\" (UniqueName: \"kubernetes.io/projected/cabfed39-28d9-47d3-84b0-4ec962cf46b8-kube-api-access-vrrbv\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.942722 kubelet[3267]: I0916 05:00:23.941790 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-cilium-cgroup\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.942722 kubelet[3267]: I0916 05:00:23.941814 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-host-proc-sys-kernel\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.942722 kubelet[3267]: 
I0916 05:00:23.941837 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-etc-cni-netd\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.942722 kubelet[3267]: I0916 05:00:23.941869 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-hostproc\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.942722 kubelet[3267]: I0916 05:00:23.941891 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-xtables-lock\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.944008 kubelet[3267]: I0916 05:00:23.941915 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cabfed39-28d9-47d3-84b0-4ec962cf46b8-cilium-ipsec-secrets\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.944008 kubelet[3267]: I0916 05:00:23.941946 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-host-proc-sys-net\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.944008 kubelet[3267]: I0916 05:00:23.941976 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-cilium-run\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.944008 kubelet[3267]: I0916 05:00:23.941996 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-lib-modules\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.944008 kubelet[3267]: I0916 05:00:23.942025 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-bpf-maps\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.944008 kubelet[3267]: I0916 05:00:23.942049 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cabfed39-28d9-47d3-84b0-4ec962cf46b8-cni-path\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.945133 kubelet[3267]: I0916 05:00:23.942071 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cabfed39-28d9-47d3-84b0-4ec962cf46b8-cilium-config-path\") pod \"cilium-455sg\" (UID: \"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:23.946366 kubelet[3267]: I0916 05:00:23.946333 3267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cabfed39-28d9-47d3-84b0-4ec962cf46b8-hubble-tls\") pod \"cilium-455sg\" (UID: 
\"cabfed39-28d9-47d3-84b0-4ec962cf46b8\") " pod="kube-system/cilium-455sg" Sep 16 05:00:24.105910 sshd[5090]: Accepted publickey for core from 139.178.68.195 port 35380 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:24.107947 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:24.114986 systemd-logind[1858]: New session 28 of user core. Sep 16 05:00:24.125416 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 16 05:00:24.219560 containerd[1909]: time="2025-09-16T05:00:24.219500270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-455sg,Uid:cabfed39-28d9-47d3-84b0-4ec962cf46b8,Namespace:kube-system,Attempt:0,}" Sep 16 05:00:24.245116 sshd[5098]: Connection closed by 139.178.68.195 port 35380 Sep 16 05:00:24.245356 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:24.254071 containerd[1909]: time="2025-09-16T05:00:24.253989137Z" level=info msg="connecting to shim e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac" address="unix:///run/containerd/s/5db527492e7cbdd0eacf20578004cf65eed7af8327dcf510cb98983a0778ad4a" namespace=k8s.io protocol=ttrpc version=3 Sep 16 05:00:24.257388 systemd[1]: sshd@27-172.31.28.73:22-139.178.68.195:35380.service: Deactivated successfully. Sep 16 05:00:24.260863 systemd[1]: session-28.scope: Deactivated successfully. Sep 16 05:00:24.265552 systemd-logind[1858]: Session 28 logged out. Waiting for processes to exit. Sep 16 05:00:24.284954 systemd[1]: Started sshd@28-172.31.28.73:22-139.178.68.195:35388.service - OpenSSH per-connection server daemon (139.178.68.195:35388). Sep 16 05:00:24.288634 systemd-logind[1858]: Removed session 28. Sep 16 05:00:24.302749 systemd[1]: Started cri-containerd-e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac.scope - libcontainer container e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac. 
Sep 16 05:00:24.356516 containerd[1909]: time="2025-09-16T05:00:24.356408409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-455sg,Uid:cabfed39-28d9-47d3-84b0-4ec962cf46b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\"" Sep 16 05:00:24.359620 containerd[1909]: time="2025-09-16T05:00:24.359497643Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 16 05:00:24.375147 containerd[1909]: time="2025-09-16T05:00:24.373128807Z" level=info msg="Container 1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:24.387618 containerd[1909]: time="2025-09-16T05:00:24.387566190Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\"" Sep 16 05:00:24.389139 containerd[1909]: time="2025-09-16T05:00:24.388443573Z" level=info msg="StartContainer for \"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\"" Sep 16 05:00:24.389985 containerd[1909]: time="2025-09-16T05:00:24.389932681Z" level=info msg="connecting to shim 1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9" address="unix:///run/containerd/s/5db527492e7cbdd0eacf20578004cf65eed7af8327dcf510cb98983a0778ad4a" protocol=ttrpc version=3 Sep 16 05:00:24.414309 systemd[1]: Started cri-containerd-1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9.scope - libcontainer container 1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9. 
Sep 16 05:00:24.458868 containerd[1909]: time="2025-09-16T05:00:24.458420772Z" level=info msg="StartContainer for \"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\" returns successfully" Sep 16 05:00:24.474053 systemd[1]: cri-containerd-1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9.scope: Deactivated successfully. Sep 16 05:00:24.474695 systemd[1]: cri-containerd-1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9.scope: Consumed 25ms CPU time, 9M memory peak, 2.7M read from disk. Sep 16 05:00:24.477661 containerd[1909]: time="2025-09-16T05:00:24.477611144Z" level=info msg="received exit event container_id:\"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\" id:\"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\" pid:5168 exited_at:{seconds:1757998824 nanos:476204821}" Sep 16 05:00:24.478204 containerd[1909]: time="2025-09-16T05:00:24.478147582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\" id:\"1ce250c30f10da646f7b7cdbd609c9cd14ca39f6819adacce28b090382f454d9\" pid:5168 exited_at:{seconds:1757998824 nanos:476204821}" Sep 16 05:00:24.479930 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 35388 ssh2: RSA SHA256:v3+cK3y4/qIZwrDjQBp9SCv5VZD/lvIU+hjTU9LJj18 Sep 16 05:00:24.484819 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 16 05:00:24.494479 systemd-logind[1858]: New session 29 of user core. Sep 16 05:00:24.501423 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 16 05:00:24.754090 containerd[1909]: time="2025-09-16T05:00:24.753955405Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 16 05:00:24.775348 containerd[1909]: time="2025-09-16T05:00:24.775283004Z" level=info msg="Container 5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:24.786196 containerd[1909]: time="2025-09-16T05:00:24.785913505Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\"" Sep 16 05:00:24.802584 containerd[1909]: time="2025-09-16T05:00:24.802514085Z" level=info msg="StartContainer for \"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\"" Sep 16 05:00:24.803642 containerd[1909]: time="2025-09-16T05:00:24.803559661Z" level=info msg="connecting to shim 5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88" address="unix:///run/containerd/s/5db527492e7cbdd0eacf20578004cf65eed7af8327dcf510cb98983a0778ad4a" protocol=ttrpc version=3 Sep 16 05:00:24.831402 systemd[1]: Started cri-containerd-5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88.scope - libcontainer container 5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88. Sep 16 05:00:24.881793 containerd[1909]: time="2025-09-16T05:00:24.881695886Z" level=info msg="StartContainer for \"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\" returns successfully" Sep 16 05:00:24.891552 systemd[1]: cri-containerd-5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88.scope: Deactivated successfully. 
Sep 16 05:00:24.892136 systemd[1]: cri-containerd-5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88.scope: Consumed 20ms CPU time, 7.1M memory peak, 1.7M read from disk. Sep 16 05:00:24.893031 containerd[1909]: time="2025-09-16T05:00:24.892329118Z" level=info msg="received exit event container_id:\"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\" id:\"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\" pid:5218 exited_at:{seconds:1757998824 nanos:891950869}" Sep 16 05:00:24.893031 containerd[1909]: time="2025-09-16T05:00:24.892544941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\" id:\"5777c3881c003ae401700c600b3cb8e5b25de2edd4a1130d3aa8a210f3d96b88\" pid:5218 exited_at:{seconds:1757998824 nanos:891950869}" Sep 16 05:00:25.757376 containerd[1909]: time="2025-09-16T05:00:25.757325656Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 16 05:00:25.787293 containerd[1909]: time="2025-09-16T05:00:25.786196696Z" level=info msg="Container 1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:25.803951 containerd[1909]: time="2025-09-16T05:00:25.803841518Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\"" Sep 16 05:00:25.808374 containerd[1909]: time="2025-09-16T05:00:25.808323523Z" level=info msg="StartContainer for \"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\"" Sep 16 05:00:25.809991 containerd[1909]: time="2025-09-16T05:00:25.809930069Z" level=info msg="connecting to shim 
1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a" address="unix:///run/containerd/s/5db527492e7cbdd0eacf20578004cf65eed7af8327dcf510cb98983a0778ad4a" protocol=ttrpc version=3 Sep 16 05:00:25.843318 systemd[1]: Started cri-containerd-1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a.scope - libcontainer container 1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a. Sep 16 05:00:25.905337 containerd[1909]: time="2025-09-16T05:00:25.905292807Z" level=info msg="StartContainer for \"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\" returns successfully" Sep 16 05:00:25.911211 systemd[1]: cri-containerd-1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a.scope: Deactivated successfully. Sep 16 05:00:25.912673 containerd[1909]: time="2025-09-16T05:00:25.912518437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\" id:\"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\" pid:5262 exited_at:{seconds:1757998825 nanos:912218876}" Sep 16 05:00:25.912673 containerd[1909]: time="2025-09-16T05:00:25.912577146Z" level=info msg="received exit event container_id:\"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\" id:\"1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a\" pid:5262 exited_at:{seconds:1757998825 nanos:912218876}" Sep 16 05:00:25.942707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cc8d910254d675f8f835704a148b72a944f65f12edbd3ef5c4b868ba76c313a-rootfs.mount: Deactivated successfully. 
Sep 16 05:00:26.763047 containerd[1909]: time="2025-09-16T05:00:26.762989821Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 16 05:00:26.782110 containerd[1909]: time="2025-09-16T05:00:26.780746213Z" level=info msg="Container cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:26.795352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642128632.mount: Deactivated successfully. Sep 16 05:00:26.806067 containerd[1909]: time="2025-09-16T05:00:26.806013012Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\"" Sep 16 05:00:26.808643 containerd[1909]: time="2025-09-16T05:00:26.808594492Z" level=info msg="StartContainer for \"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\"" Sep 16 05:00:26.810712 containerd[1909]: time="2025-09-16T05:00:26.810651124Z" level=info msg="connecting to shim cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e" address="unix:///run/containerd/s/5db527492e7cbdd0eacf20578004cf65eed7af8327dcf510cb98983a0778ad4a" protocol=ttrpc version=3 Sep 16 05:00:26.842308 systemd[1]: Started cri-containerd-cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e.scope - libcontainer container cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e. Sep 16 05:00:26.877992 systemd[1]: cri-containerd-cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e.scope: Deactivated successfully. 
Sep 16 05:00:26.882278 containerd[1909]: time="2025-09-16T05:00:26.882198709Z" level=info msg="received exit event container_id:\"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\" id:\"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\" pid:5301 exited_at:{seconds:1757998826 nanos:881702090}" Sep 16 05:00:26.882595 containerd[1909]: time="2025-09-16T05:00:26.882516943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\" id:\"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\" pid:5301 exited_at:{seconds:1757998826 nanos:881702090}" Sep 16 05:00:26.894255 containerd[1909]: time="2025-09-16T05:00:26.894128195Z" level=info msg="StartContainer for \"cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e\" returns successfully" Sep 16 05:00:26.914223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfd575d63a7f5790527678829bf051565a7f0201bf0aca1324264d352930a84e-rootfs.mount: Deactivated successfully. 
Sep 16 05:00:27.299144 kubelet[3267]: E0916 05:00:27.299059 3267 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-j2x66" podUID="f3a955d0-a8fa-4172-9864-859670acf183" Sep 16 05:00:27.768584 containerd[1909]: time="2025-09-16T05:00:27.768464771Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 16 05:00:27.792368 containerd[1909]: time="2025-09-16T05:00:27.792311549Z" level=info msg="Container cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:27.811765 containerd[1909]: time="2025-09-16T05:00:27.811715819Z" level=info msg="CreateContainer within sandbox \"e7b5d285bdf1407abb2f4ddeab83b35ebc409de7a363019c07441a34b6a885ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\"" Sep 16 05:00:27.813199 containerd[1909]: time="2025-09-16T05:00:27.812920454Z" level=info msg="StartContainer for \"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\"" Sep 16 05:00:27.814866 containerd[1909]: time="2025-09-16T05:00:27.814768145Z" level=info msg="connecting to shim cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3" address="unix:///run/containerd/s/5db527492e7cbdd0eacf20578004cf65eed7af8327dcf510cb98983a0778ad4a" protocol=ttrpc version=3 Sep 16 05:00:27.841708 systemd[1]: Started cri-containerd-cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3.scope - libcontainer container cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3. 
Sep 16 05:00:27.896277 containerd[1909]: time="2025-09-16T05:00:27.896202406Z" level=info msg="StartContainer for \"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" returns successfully" Sep 16 05:00:28.023625 containerd[1909]: time="2025-09-16T05:00:28.023272032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"a8c3b28939ecca41ee1442af4a7227b983182d903bf27a3a35a7164f4d318af4\" pid:5369 exited_at:{seconds:1757998828 nanos:22844465}" Sep 16 05:00:28.703157 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 16 05:00:29.109818 containerd[1909]: time="2025-09-16T05:00:29.109774008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"64ac130aad9f0adb6b741fb36065b651915b4deef7883e3f62203a4ada623f40\" pid:5451 exit_status:1 exited_at:{seconds:1757998829 nanos:109513121}" Sep 16 05:00:31.301791 containerd[1909]: time="2025-09-16T05:00:31.301721520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"30065b307f58e16626da3486298d79e0fec02ff12bfd29dd2192520f3fc44031\" pid:5709 exit_status:1 exited_at:{seconds:1757998831 nanos:300702027}" Sep 16 05:00:31.884007 (udev-worker)[5887]: Network interface NamePolicy= disabled on kernel command line. Sep 16 05:00:31.886534 systemd-networkd[1815]: lxc_health: Link UP Sep 16 05:00:31.894631 (udev-worker)[5888]: Network interface NamePolicy= disabled on kernel command line. 
Sep 16 05:00:31.895131 systemd-networkd[1815]: lxc_health: Gained carrier Sep 16 05:00:32.259447 kubelet[3267]: I0916 05:00:32.258397 3267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-455sg" podStartSLOduration=9.258374099 podStartE2EDuration="9.258374099s" podCreationTimestamp="2025-09-16 05:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-16 05:00:28.822319108 +0000 UTC m=+110.654465607" watchObservedRunningTime="2025-09-16 05:00:32.258374099 +0000 UTC m=+114.090520597" Sep 16 05:00:33.056387 systemd-networkd[1815]: lxc_health: Gained IPv6LL Sep 16 05:00:33.636782 containerd[1909]: time="2025-09-16T05:00:33.636719027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"791632074c61a0dab6168343aa0a4295d08f22beba329f838829fc00b9e782be\" pid:5923 exited_at:{seconds:1757998833 nanos:634984321}" Sep 16 05:00:35.474972 ntpd[2144]: Listen normally on 13 lxc_health [fe80::e014:e0ff:fecf:b82b%14]:123 Sep 16 05:00:35.475513 ntpd[2144]: 16 Sep 05:00:35 ntpd[2144]: Listen normally on 13 lxc_health [fe80::e014:e0ff:fecf:b82b%14]:123 Sep 16 05:00:35.866904 containerd[1909]: time="2025-09-16T05:00:35.866857385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"d6bfa5715b6675e669c9ef6e7a3433a6104d09b25fc08ae1dc53daa0564317ed\" pid:5953 exited_at:{seconds:1757998835 nanos:866243997}" Sep 16 05:00:37.995508 containerd[1909]: time="2025-09-16T05:00:37.995455102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"520c81fafa99479bcffdb6428be6921eb250da5c97e8a5a2e404ca0ca40e3c47\" pid:5983 exited_at:{seconds:1757998837 nanos:995050206}" Sep 16 05:00:38.329976 
containerd[1909]: time="2025-09-16T05:00:38.329933953Z" level=info msg="StopPodSandbox for \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\"" Sep 16 05:00:38.330144 containerd[1909]: time="2025-09-16T05:00:38.330110209Z" level=info msg="TearDown network for sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" successfully" Sep 16 05:00:38.330144 containerd[1909]: time="2025-09-16T05:00:38.330122891Z" level=info msg="StopPodSandbox for \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" returns successfully" Sep 16 05:00:38.330639 containerd[1909]: time="2025-09-16T05:00:38.330608195Z" level=info msg="RemovePodSandbox for \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\"" Sep 16 05:00:38.333916 containerd[1909]: time="2025-09-16T05:00:38.333838924Z" level=info msg="Forcibly stopping sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\"" Sep 16 05:00:38.334112 containerd[1909]: time="2025-09-16T05:00:38.334049840Z" level=info msg="TearDown network for sandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" successfully" Sep 16 05:00:38.336115 containerd[1909]: time="2025-09-16T05:00:38.336057991Z" level=info msg="Ensure that sandbox d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e in task-service has been cleanup successfully" Sep 16 05:00:38.349156 containerd[1909]: time="2025-09-16T05:00:38.348842515Z" level=info msg="RemovePodSandbox \"d7492d7503e88045e1afbb2c8cf3b591912ef8cf8b5dd0f412226242ae85813e\" returns successfully" Sep 16 05:00:38.349446 containerd[1909]: time="2025-09-16T05:00:38.349427981Z" level=info msg="StopPodSandbox for \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\"" Sep 16 05:00:38.349570 containerd[1909]: time="2025-09-16T05:00:38.349544317Z" level=info msg="TearDown network for sandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" successfully" Sep 16 
05:00:38.349608 containerd[1909]: time="2025-09-16T05:00:38.349569735Z" level=info msg="StopPodSandbox for \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" returns successfully" Sep 16 05:00:38.349853 containerd[1909]: time="2025-09-16T05:00:38.349831494Z" level=info msg="RemovePodSandbox for \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\"" Sep 16 05:00:38.349932 containerd[1909]: time="2025-09-16T05:00:38.349855902Z" level=info msg="Forcibly stopping sandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\"" Sep 16 05:00:38.349932 containerd[1909]: time="2025-09-16T05:00:38.349919755Z" level=info msg="TearDown network for sandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" successfully" Sep 16 05:00:38.351066 containerd[1909]: time="2025-09-16T05:00:38.351030376Z" level=info msg="Ensure that sandbox 90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f in task-service has been cleanup successfully" Sep 16 05:00:38.358172 containerd[1909]: time="2025-09-16T05:00:38.358109225Z" level=info msg="RemovePodSandbox \"90f47b1991561c92593628c360b286cc18999bd1b252ad6ccc2407bf6c466b6f\" returns successfully" Sep 16 05:00:40.239280 containerd[1909]: time="2025-09-16T05:00:40.239222837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf545e8f8cd8a337574b75cd8b84ba488b4be321c666f00012b1f915a4ce39b3\" id:\"6c88214c7c1b472d133fbf8883c34a0fc8e43c9fce6f746058eefb929cd8ce1a\" pid:6010 exited_at:{seconds:1757998840 nanos:238770788}" Sep 16 05:00:40.273229 sshd[5198]: Connection closed by 139.178.68.195 port 35388 Sep 16 05:00:40.274450 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Sep 16 05:00:40.286947 systemd[1]: sshd@28-172.31.28.73:22-139.178.68.195:35388.service: Deactivated successfully. Sep 16 05:00:40.290694 systemd[1]: session-29.scope: Deactivated successfully. Sep 16 05:00:40.293989 systemd-logind[1858]: Session 29 logged out. 
Waiting for processes to exit. Sep 16 05:00:40.295771 systemd-logind[1858]: Removed session 29. Sep 16 05:00:56.229614 systemd[1]: cri-containerd-9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f.scope: Deactivated successfully. Sep 16 05:00:56.230658 systemd[1]: cri-containerd-9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f.scope: Consumed 2.879s CPU time, 67.8M memory peak, 19.9M read from disk. Sep 16 05:00:56.232457 containerd[1909]: time="2025-09-16T05:00:56.231767866Z" level=info msg="received exit event container_id:\"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\" id:\"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\" pid:3116 exit_status:1 exited_at:{seconds:1757998856 nanos:230514158}" Sep 16 05:00:56.234497 containerd[1909]: time="2025-09-16T05:00:56.233356463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\" id:\"9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f\" pid:3116 exit_status:1 exited_at:{seconds:1757998856 nanos:230514158}" Sep 16 05:00:56.264474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f-rootfs.mount: Deactivated successfully. 
Sep 16 05:00:56.846356 kubelet[3267]: I0916 05:00:56.846314 3267 scope.go:117] "RemoveContainer" containerID="9e8dc75ee8c9b9c0cad90ab4f555d8566657c8dfd4e73a5150eb9e2c0143e41f" Sep 16 05:00:56.849691 containerd[1909]: time="2025-09-16T05:00:56.849593384Z" level=info msg="CreateContainer within sandbox \"169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 16 05:00:56.865778 containerd[1909]: time="2025-09-16T05:00:56.865737624Z" level=info msg="Container 8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:00:56.870202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1234317572.mount: Deactivated successfully. Sep 16 05:00:56.882930 containerd[1909]: time="2025-09-16T05:00:56.882858161Z" level=info msg="CreateContainer within sandbox \"169f21647db58d97dece93f672dce17351964d9e3887e85a94aa541079ed0a6d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2\"" Sep 16 05:00:56.883622 containerd[1909]: time="2025-09-16T05:00:56.883539691Z" level=info msg="StartContainer for \"8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2\"" Sep 16 05:00:56.884770 containerd[1909]: time="2025-09-16T05:00:56.884737431Z" level=info msg="connecting to shim 8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2" address="unix:///run/containerd/s/607254b4c029e2c12daa053691f8adb0f662b78fb9ef06b44640edc0b4375241" protocol=ttrpc version=3 Sep 16 05:00:56.914318 systemd[1]: Started cri-containerd-8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2.scope - libcontainer container 8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2. 
Sep 16 05:00:56.983518 containerd[1909]: time="2025-09-16T05:00:56.983457173Z" level=info msg="StartContainer for \"8dbb650fa57d0095e49fd5ded23717eed995c62428531e691fcea1865dfb6ff2\" returns successfully" Sep 16 05:01:00.680506 systemd[1]: cri-containerd-25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4.scope: Deactivated successfully. Sep 16 05:01:00.680901 systemd[1]: cri-containerd-25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4.scope: Consumed 2.139s CPU time, 34.2M memory peak, 13.5M read from disk. Sep 16 05:01:00.684807 containerd[1909]: time="2025-09-16T05:01:00.684682198Z" level=info msg="received exit event container_id:\"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\" id:\"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\" pid:3103 exit_status:1 exited_at:{seconds:1757998860 nanos:683988467}" Sep 16 05:01:00.687142 containerd[1909]: time="2025-09-16T05:01:00.687053709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\" id:\"25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4\" pid:3103 exit_status:1 exited_at:{seconds:1757998860 nanos:683988467}" Sep 16 05:01:00.718988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4-rootfs.mount: Deactivated successfully. 
Sep 16 05:01:00.865276 kubelet[3267]: I0916 05:01:00.864908 3267 scope.go:117] "RemoveContainer" containerID="25b5a84e1f0a3a54ea07cd13f6d6990b36e4c9ab5ffc2413a1b646f8c079f5a4" Sep 16 05:01:00.870446 containerd[1909]: time="2025-09-16T05:01:00.870002245Z" level=info msg="CreateContainer within sandbox \"69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 16 05:01:00.889426 containerd[1909]: time="2025-09-16T05:01:00.889377393Z" level=info msg="Container b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156: CDI devices from CRI Config.CDIDevices: []" Sep 16 05:01:00.903840 containerd[1909]: time="2025-09-16T05:01:00.903772763Z" level=info msg="CreateContainer within sandbox \"69df08bec008f5d5343244b7dedb576a835d38db1283fd0fe884b4f1086c8eab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156\"" Sep 16 05:01:00.904867 containerd[1909]: time="2025-09-16T05:01:00.904827440Z" level=info msg="StartContainer for \"b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156\"" Sep 16 05:01:00.906413 containerd[1909]: time="2025-09-16T05:01:00.906363012Z" level=info msg="connecting to shim b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156" address="unix:///run/containerd/s/b538fc7763431c49261db2ad4c054ba1e15f93f5877f8d97919845ebad9555a4" protocol=ttrpc version=3 Sep 16 05:01:00.939325 systemd[1]: Started cri-containerd-b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156.scope - libcontainer container b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156. 
Sep 16 05:01:01.010137 containerd[1909]: time="2025-09-16T05:01:01.010019936Z" level=info msg="StartContainer for \"b8648ccaa2276398ff0b462e6abc71830d61d278ae07123268c4df2f72f80156\" returns successfully" Sep 16 05:01:01.319367 kubelet[3267]: E0916 05:01:01.319300 3267 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-73?timeout=10s\": context deadline exceeded" Sep 16 05:01:11.320731 kubelet[3267]: E0916 05:01:11.320407 3267 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-73?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"