Jul 7 00:17:22.917142 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:58:13 -00 2025
Jul 7 00:17:22.917184 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:17:22.917898 kernel: BIOS-provided physical RAM map:
Jul 7 00:17:22.917920 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 00:17:22.917930 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jul 7 00:17:22.917941 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 7 00:17:22.917956 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 7 00:17:22.917967 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 7 00:17:22.917984 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 7 00:17:22.917994 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 7 00:17:22.918005 kernel: NX (Execute Disable) protection: active
Jul 7 00:17:22.918017 kernel: APIC: Static calls initialized
Jul 7 00:17:22.918027 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Jul 7 00:17:22.918038 kernel: extended physical RAM map:
Jul 7 00:17:22.918055 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 00:17:22.918066 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Jul 7 00:17:22.918085 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Jul 7 00:17:22.918100 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Jul 7 00:17:22.918111 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Jul 7 00:17:22.918122 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jul 7 00:17:22.918135 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jul 7 00:17:22.918147 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jul 7 00:17:22.918159 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jul 7 00:17:22.918173 kernel: efi: EFI v2.7 by EDK II
Jul 7 00:17:22.918188 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518
Jul 7 00:17:22.918232 kernel: secureboot: Secure boot disabled
Jul 7 00:17:22.918250 kernel: SMBIOS 2.7 present.
Jul 7 00:17:22.918262 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 7 00:17:22.918273 kernel: DMI: Memory slots populated: 1/1
Jul 7 00:17:22.918283 kernel: Hypervisor detected: KVM
Jul 7 00:17:22.918295 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:17:22.918307 kernel: kvm-clock: using sched offset of 4988688035 cycles
Jul 7 00:17:22.918320 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:17:22.918333 kernel: tsc: Detected 2499.996 MHz processor
Jul 7 00:17:22.918347 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:17:22.918364 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:17:22.918378 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jul 7 00:17:22.918391 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 00:17:22.918405 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:17:22.918419 kernel: Using GB pages for direct mapping
Jul 7 00:17:22.918439 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:17:22.918456 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jul 7 00:17:22.918470 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 7 00:17:22.918485 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 7 00:17:22.918500 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 7 00:17:22.918514 kernel: ACPI: FACS 0x00000000789D0000 000040
Jul 7 00:17:22.918529 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 7 00:17:22.918544 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 7 00:17:22.918559 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 7 00:17:22.918575 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 7 00:17:22.918590 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 7 00:17:22.918605 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 7 00:17:22.918619 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 7 00:17:22.918634 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jul 7 00:17:22.918647 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jul 7 00:17:22.918661 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jul 7 00:17:22.918674 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jul 7 00:17:22.918700 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jul 7 00:17:22.918714 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jul 7 00:17:22.918730 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jul 7 00:17:22.918744 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jul 7 00:17:22.918758 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jul 7 00:17:22.918773 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jul 7 00:17:22.918787 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jul 7 00:17:22.918803 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jul 7 00:17:22.918817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 7 00:17:22.918832 kernel: NUMA: Initialized distance table, cnt=1
Jul 7 00:17:22.918848 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Jul 7 00:17:22.918862 kernel: Zone ranges:
Jul 7 00:17:22.918876 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:17:22.918888 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jul 7 00:17:22.918904 kernel: Normal empty
Jul 7 00:17:22.918918 kernel: Device empty
Jul 7 00:17:22.918933 kernel: Movable zone start for each node
Jul 7 00:17:22.918947 kernel: Early memory node ranges
Jul 7 00:17:22.918961 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 00:17:22.918979 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jul 7 00:17:22.918993 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jul 7 00:17:22.919008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jul 7 00:17:22.919023 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:17:22.919037 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 00:17:22.919052 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 7 00:17:22.919066 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jul 7 00:17:22.919080 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 7 00:17:22.919094 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:17:22.919108 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 7 00:17:22.919126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:17:22.919141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:17:22.919156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:17:22.919170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:17:22.919185 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:17:22.920212 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:17:22.920252 kernel: TSC deadline timer available
Jul 7 00:17:22.920268 kernel: CPU topo: Max. logical packages: 1
Jul 7 00:17:22.920282 kernel: CPU topo: Max. logical dies: 1
Jul 7 00:17:22.920302 kernel: CPU topo: Max. dies per package: 1
Jul 7 00:17:22.920316 kernel: CPU topo: Max. threads per core: 2
Jul 7 00:17:22.920331 kernel: CPU topo: Num. cores per package: 1
Jul 7 00:17:22.920345 kernel: CPU topo: Num. threads per package: 2
Jul 7 00:17:22.920360 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 00:17:22.920374 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:17:22.920389 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jul 7 00:17:22.920404 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:17:22.920417 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:17:22.920435 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 00:17:22.920450 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 00:17:22.920465 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 00:17:22.920479 kernel: pcpu-alloc: [0] 0 1
Jul 7 00:17:22.920494 kernel: kvm-guest: PV spinlocks enabled
Jul 7 00:17:22.920509 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 00:17:22.920526 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:17:22.920542 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:17:22.920559 kernel: random: crng init done
Jul 7 00:17:22.920574 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:17:22.920588 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 00:17:22.920603 kernel: Fallback order for Node 0: 0
Jul 7 00:17:22.920618 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Jul 7 00:17:22.920633 kernel: Policy zone: DMA32
Jul 7 00:17:22.920658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:17:22.920676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:17:22.920692 kernel: Kernel/User page tables isolation: enabled
Jul 7 00:17:22.920707 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 00:17:22.920721 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 00:17:22.920737 kernel: Dynamic Preempt: voluntary
Jul 7 00:17:22.920755 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:17:22.920772 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:17:22.920787 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:17:22.920803 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:17:22.920819 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:17:22.920837 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:17:22.920853 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:17:22.920869 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:17:22.920884 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:17:22.920900 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:17:22.920916 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:17:22.920932 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 00:17:22.920947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:17:22.920966 kernel: Console: colour dummy device 80x25
Jul 7 00:17:22.920982 kernel: printk: legacy console [tty0] enabled
Jul 7 00:17:22.920997 kernel: printk: legacy console [ttyS0] enabled
Jul 7 00:17:22.921012 kernel: ACPI: Core revision 20240827
Jul 7 00:17:22.921028 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 7 00:17:22.921044 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:17:22.921059 kernel: x2apic enabled
Jul 7 00:17:22.921075 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:17:22.921090 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 7 00:17:22.921109 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jul 7 00:17:22.921125 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 7 00:17:22.921140 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 7 00:17:22.921156 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:17:22.921170 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:17:22.921185 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:17:22.924905 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 7 00:17:22.924941 kernel: RETBleed: Vulnerable
Jul 7 00:17:22.924955 kernel: Speculative Store Bypass: Vulnerable
Jul 7 00:17:22.924970 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 00:17:22.924986 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 00:17:22.925008 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 7 00:17:22.925023 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 00:17:22.925038 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:17:22.925052 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:17:22.925066 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:17:22.925080 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 7 00:17:22.925093 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 7 00:17:22.925106 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 7 00:17:22.925120 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 7 00:17:22.925134 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 7 00:17:22.925151 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 7 00:17:22.925164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:17:22.925177 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 7 00:17:22.925190 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 7 00:17:22.925248 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 7 00:17:22.925262 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 7 00:17:22.925275 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 7 00:17:22.925289 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 7 00:17:22.925302 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 7 00:17:22.925317 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:17:22.925331 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:17:22.925347 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 00:17:22.925365 kernel: landlock: Up and running.
Jul 7 00:17:22.925379 kernel: SELinux: Initializing.
Jul 7 00:17:22.925394 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:17:22.925409 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 00:17:22.925425 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 7 00:17:22.925440 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 7 00:17:22.925456 kernel: signal: max sigframe size: 3632
Jul 7 00:17:22.925472 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:17:22.925489 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:17:22.925504 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 00:17:22.925521 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 00:17:22.925536 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:17:22.925555 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:17:22.925577 kernel: .... node #0, CPUs: #1
Jul 7 00:17:22.925594 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 7 00:17:22.925610 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 7 00:17:22.925625 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:17:22.925639 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jul 7 00:17:22.925656 kernel: Memory: 1908048K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 125192K reserved, 0K cma-reserved)
Jul 7 00:17:22.925675 kernel: devtmpfs: initialized
Jul 7 00:17:22.925690 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:17:22.925706 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jul 7 00:17:22.925723 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:17:22.925737 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:17:22.925752 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:17:22.925767 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:17:22.925783 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:17:22.925800 kernel: audit: type=2000 audit(1751847440.230:1): state=initialized audit_enabled=0 res=1
Jul 7 00:17:22.925814 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:17:22.925829 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:17:22.925844 kernel: cpuidle: using governor menu
Jul 7 00:17:22.925860 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:17:22.925876 kernel: dca service started, version 1.12.1
Jul 7 00:17:22.925890 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:17:22.925905 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:17:22.925920 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:17:22.925938 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:17:22.925954 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:17:22.925968 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:17:22.925983 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:17:22.925997 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:17:22.926012 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:17:22.926029 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 7 00:17:22.926045 kernel: ACPI: Interpreter enabled
Jul 7 00:17:22.926060 kernel: ACPI: PM: (supports S0 S5)
Jul 7 00:17:22.926079 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:17:22.926095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:17:22.926109 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:17:22.926123 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 00:17:22.926139 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:17:22.926411 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:17:22.926553 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 00:17:22.926695 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 00:17:22.926719 kernel: acpiphp: Slot [3] registered
Jul 7 00:17:22.926736 kernel: acpiphp: Slot [4] registered
Jul 7 00:17:22.926751 kernel: acpiphp: Slot [5] registered
Jul 7 00:17:22.926767 kernel: acpiphp: Slot [6] registered
Jul 7 00:17:22.926782 kernel: acpiphp: Slot [7] registered
Jul 7 00:17:22.926797 kernel: acpiphp: Slot [8] registered
Jul 7 00:17:22.926813 kernel: acpiphp: Slot [9] registered
Jul 7 00:17:22.926829 kernel: acpiphp: Slot [10] registered
Jul 7 00:17:22.926844 kernel: acpiphp: Slot [11] registered
Jul 7 00:17:22.926863 kernel: acpiphp: Slot [12] registered
Jul 7 00:17:22.926878 kernel: acpiphp: Slot [13] registered
Jul 7 00:17:22.926894 kernel: acpiphp: Slot [14] registered
Jul 7 00:17:22.926909 kernel: acpiphp: Slot [15] registered
Jul 7 00:17:22.926924 kernel: acpiphp: Slot [16] registered
Jul 7 00:17:22.926940 kernel: acpiphp: Slot [17] registered
Jul 7 00:17:22.926955 kernel: acpiphp: Slot [18] registered
Jul 7 00:17:22.926971 kernel: acpiphp: Slot [19] registered
Jul 7 00:17:22.926986 kernel: acpiphp: Slot [20] registered
Jul 7 00:17:22.927004 kernel: acpiphp: Slot [21] registered
Jul 7 00:17:22.927019 kernel: acpiphp: Slot [22] registered
Jul 7 00:17:22.927034 kernel: acpiphp: Slot [23] registered
Jul 7 00:17:22.927050 kernel: acpiphp: Slot [24] registered
Jul 7 00:17:22.927065 kernel: acpiphp: Slot [25] registered
Jul 7 00:17:22.927080 kernel: acpiphp: Slot [26] registered
Jul 7 00:17:22.927095 kernel: acpiphp: Slot [27] registered
Jul 7 00:17:22.927111 kernel: acpiphp: Slot [28] registered
Jul 7 00:17:22.927126 kernel: acpiphp: Slot [29] registered
Jul 7 00:17:22.927141 kernel: acpiphp: Slot [30] registered
Jul 7 00:17:22.927160 kernel: acpiphp: Slot [31] registered
Jul 7 00:17:22.927175 kernel: PCI host bridge to bus 0000:00
Jul 7 00:17:22.927339 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:17:22.927477 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:17:22.927609 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:17:22.927735 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 7 00:17:22.927859 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jul 7 00:17:22.927989 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:17:22.928153 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 7 00:17:22.929387 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 7 00:17:22.929554 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Jul 7 00:17:22.929695 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 7 00:17:22.929833 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 7 00:17:22.929974 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 7 00:17:22.930108 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 7 00:17:22.931345 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 7 00:17:22.931512 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 7 00:17:22.931658 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 7 00:17:22.931810 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 00:17:22.931950 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Jul 7 00:17:22.932092 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 7 00:17:22.932243 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:17:22.932388 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Jul 7 00:17:22.932523 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Jul 7 00:17:22.932665 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Jul 7 00:17:22.932806 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Jul 7 00:17:22.932833 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:17:22.932850 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:17:22.932866 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:17:22.932881 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:17:22.932898 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 00:17:22.932914 kernel: iommu: Default domain type: Translated
Jul 7 00:17:22.932930 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:17:22.932946 kernel: efivars: Registered efivars operations
Jul 7 00:17:22.932962 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:17:22.932980 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:17:22.932996 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Jul 7 00:17:22.933012 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jul 7 00:17:22.933028 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jul 7 00:17:22.933159 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 7 00:17:22.935385 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 7 00:17:22.935551 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:17:22.935574 kernel: vgaarb: loaded
Jul 7 00:17:22.935593 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 7 00:17:22.935617 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 7 00:17:22.935634 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:17:22.935651 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:17:22.935669 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:17:22.935686 kernel: pnp: PnP ACPI init
Jul 7 00:17:22.935704 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 00:17:22.935721 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:17:22.935738 kernel: NET: Registered PF_INET protocol family
Jul 7 00:17:22.935756 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:17:22.935776 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 7 00:17:22.935793 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:17:22.935811 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 00:17:22.935828 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 7 00:17:22.935845 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 7 00:17:22.935862 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:17:22.935879 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 00:17:22.935897 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:17:22.935914 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:17:22.936049 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:17:22.936166 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:17:22.937159 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:17:22.937337 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 7 00:17:22.937461 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jul 7 00:17:22.937605 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 00:17:22.937627 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:17:22.937644 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 7 00:17:22.937665 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jul 7 00:17:22.937681 kernel: clocksource: Switched to clocksource tsc
Jul 7 00:17:22.937697 kernel: Initialise system trusted keyrings
Jul 7 00:17:22.937713 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 7 00:17:22.937729 kernel: Key type asymmetric registered
Jul 7 00:17:22.937745 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:17:22.937761 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 00:17:22.937777 kernel: io scheduler mq-deadline registered
Jul 7 00:17:22.937793 kernel: io scheduler kyber registered
Jul 7 00:17:22.937813 kernel: io scheduler bfq registered
Jul 7 00:17:22.937828 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:17:22.937844 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:17:22.937861 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:17:22.937877 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:17:22.937892 kernel: i8042: Warning: Keylock active
Jul 7 00:17:22.937907 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:17:22.937923 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:17:22.938069 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 7 00:17:22.939246 kernel: rtc_cmos 00:00: registered as rtc0
Jul 7 00:17:22.939425 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T00:17:22 UTC (1751847442)
Jul 7 00:17:22.939548 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 7 00:17:22.939568 kernel: intel_pstate: CPU model not supported
Jul 7 00:17:22.939609 kernel: efifb: probing for efifb
Jul 7 00:17:22.939628 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Jul 7 00:17:22.939645 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jul 7 00:17:22.939664 kernel: efifb: scrolling: redraw
Jul 7 00:17:22.939681 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 00:17:22.939697 kernel: Console: switching to colour frame buffer device 100x37
Jul 7 00:17:22.939715 kernel: fb0: EFI VGA frame buffer device
Jul 7 00:17:22.939732 kernel: pstore: Using crash dump compression: deflate
Jul 7 00:17:22.939749 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 00:17:22.939766 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:17:22.939783 kernel: Segment Routing with IPv6
Jul 7 00:17:22.939800 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:17:22.939816 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:17:22.939835 kernel: Key type dns_resolver registered
Jul 7 00:17:22.939852 kernel: IPI shorthand broadcast: enabled
Jul 7 00:17:22.939869 kernel: sched_clock: Marking stable (2705003182, 142373550)->(2923079332, -75702600)
Jul 7 00:17:22.939886 kernel: registered taskstats version 1
Jul 7 00:17:22.939903 kernel: Loading compiled-in X.509 certificates
Jul 7 00:17:22.939919 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 025c05e23c9778f7a70ff09fb369dd949499fb06'
Jul 7 00:17:22.939936 kernel: Demotion targets for Node 0: null
Jul 7 00:17:22.939952 kernel: Key type .fscrypt registered
Jul 7 00:17:22.939968 kernel: Key type fscrypt-provisioning registered
Jul 7 00:17:22.939987 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:17:22.940004 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:17:22.940020 kernel: ima: No architecture policies found
Jul 7 00:17:22.940037 kernel: clk: Disabling unused clocks
Jul 7 00:17:22.940054 kernel: Warning: unable to open an initial console.
Jul 7 00:17:22.940073 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 00:17:22.940089 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 00:17:22.940106 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 00:17:22.940125 kernel: Run /init as init process
Jul 7 00:17:22.940145 kernel: with arguments:
Jul 7 00:17:22.940161 kernel: /init
Jul 7 00:17:22.940178 kernel: with environment:
Jul 7 00:17:22.940194 kernel: HOME=/
Jul 7 00:17:22.941346 kernel: TERM=linux
Jul 7 00:17:22.941372 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:17:22.941390 systemd[1]: Successfully made /usr/ read-only.
Jul 7 00:17:22.941412 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 00:17:22.941429 systemd[1]: Detected virtualization amazon.
Jul 7 00:17:22.941445 systemd[1]: Detected architecture x86-64.
Jul 7 00:17:22.941461 systemd[1]: Running in initrd.
Jul 7 00:17:22.941477 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:17:22.941497 systemd[1]: Hostname set to .
Jul 7 00:17:22.941513 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:17:22.941529 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:17:22.941545 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:17:22.941562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:17:22.941580 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:17:22.941596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:17:22.941613 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:17:22.941634 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:17:22.941652 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:17:22.941668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:17:22.941685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:17:22.941701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:17:22.941718 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:17:22.941734 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:17:22.941754 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:17:22.941770 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:17:22.941787 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:17:22.941804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:17:22.941820 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:17:22.941837 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 00:17:22.941854 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:17:22.941870 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:17:22.941890 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:17:22.941906 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:17:22.941923 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:17:22.941939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:17:22.941955 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:17:22.941972 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 00:17:22.941989 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:17:22.942005 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:17:22.942022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:17:22.942041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:17:22.942058 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:17:22.942108 systemd-journald[207]: Collecting audit messages is disabled.
Jul 7 00:17:22.942151 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:17:22.942168 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:17:22.942185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:17:22.942277 systemd-journald[207]: Journal started
Jul 7 00:17:22.942315 systemd-journald[207]: Runtime Journal (/run/log/journal/ec2dbaaddc20cf6f692b51187f19f58f) is 4.8M, max 38.4M, 33.6M free.
Jul 7 00:17:22.911579 systemd-modules-load[208]: Inserted module 'overlay'
Jul 7 00:17:22.965375 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:17:22.965413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:17:22.965435 kernel: Bridge firewalling registered
Jul 7 00:17:22.965296 systemd-modules-load[208]: Inserted module 'br_netfilter'
Jul 7 00:17:22.970242 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:17:22.971065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:17:22.971929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:17:22.975996 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:17:22.980356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:17:22.986632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:17:22.993423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:17:23.006399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:17:23.008230 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:17:23.018692 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 00:17:23.022535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:17:23.023446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:17:23.028152 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:17:23.029818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:17:23.040361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:17:23.053668 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e91aabf5a2d4674d97b8508f9502216224d5fb9433440e4c8f906b950e21abf8
Jul 7 00:17:23.085405 systemd-resolved[246]: Positive Trust Anchors:
Jul 7 00:17:23.086051 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:17:23.086092 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:17:23.091840 systemd-resolved[246]: Defaulting to hostname 'linux'.
Jul 7 00:17:23.093471 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:17:23.094278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:17:23.146244 kernel: SCSI subsystem initialized
Jul 7 00:17:23.156231 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:17:23.168235 kernel: iscsi: registered transport (tcp)
Jul 7 00:17:23.189432 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:17:23.189513 kernel: QLogic iSCSI HBA Driver
Jul 7 00:17:23.209082 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:17:23.224765 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:17:23.228287 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:17:23.272564 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:17:23.274591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:17:23.330249 kernel: raid6: avx512x4 gen() 17879 MB/s
Jul 7 00:17:23.348233 kernel: raid6: avx512x2 gen() 17969 MB/s
Jul 7 00:17:23.366229 kernel: raid6: avx512x1 gen() 17862 MB/s
Jul 7 00:17:23.384227 kernel: raid6: avx2x4 gen() 17778 MB/s
Jul 7 00:17:23.402231 kernel: raid6: avx2x2 gen() 17550 MB/s
Jul 7 00:17:23.420516 kernel: raid6: avx2x1 gen() 13451 MB/s
Jul 7 00:17:23.420582 kernel: raid6: using algorithm avx512x2 gen() 17969 MB/s
Jul 7 00:17:23.439501 kernel: raid6: .... xor() 24272 MB/s, rmw enabled
Jul 7 00:17:23.439571 kernel: raid6: using avx512x2 recovery algorithm
Jul 7 00:17:23.461243 kernel: xor: automatically using best checksumming function avx
Jul 7 00:17:23.629240 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:17:23.636360 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:17:23.638405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:17:23.670163 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jul 7 00:17:23.677022 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:17:23.681357 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:17:23.717237 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jul 7 00:17:23.717527 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jul 7 00:17:23.745047 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:17:23.747093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:17:23.809784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:17:23.813156 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:17:23.901231 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:17:23.925249 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 7 00:17:23.930593 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 00:17:23.938238 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:17:23.941928 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 7 00:17:23.942195 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 7 00:17:23.939661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:17:23.984160 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 7 00:17:23.984518 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 00:17:23.985046 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:77:14:69:3f:6f
Jul 7 00:17:23.985265 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:17:23.985286 kernel: GPT:9289727 != 16777215
Jul 7 00:17:23.985304 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:17:23.985321 kernel: GPT:9289727 != 16777215
Jul 7 00:17:23.985337 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:17:23.985359 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:17:23.939935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:17:23.977076 (udev-worker)[511]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 00:17:23.984690 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:17:23.987862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:17:23.990874 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 00:17:24.008240 kernel: nvme nvme0: using unchecked data buffer
Jul 7 00:17:24.022289 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:17:24.102827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 7 00:17:24.115934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 00:17:24.117468 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:17:24.127318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 7 00:17:24.127827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 7 00:17:24.139966 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 7 00:17:24.148605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:17:24.149245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:17:24.150615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:17:24.152441 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:17:24.155341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:17:24.175860 disk-uuid[692]: Primary Header is updated.
Jul 7 00:17:24.175860 disk-uuid[692]: Secondary Entries is updated.
Jul 7 00:17:24.175860 disk-uuid[692]: Secondary Header is updated.
Jul 7 00:17:24.184346 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:17:24.184104 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:17:25.199224 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 00:17:25.199438 disk-uuid[697]: The operation has completed successfully.
Jul 7 00:17:25.356852 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:17:25.356999 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:17:25.378212 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:17:25.391718 sh[960]: Success
Jul 7 00:17:25.420709 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:17:25.420788 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:17:25.420809 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 00:17:25.434308 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 7 00:17:25.533892 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:17:25.538313 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:17:25.554026 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:17:25.572717 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 00:17:25.572796 kernel: BTRFS: device fsid 9d729180-1373-4e9f-840c-4db0e9220239 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (983)
Jul 7 00:17:25.578971 kernel: BTRFS info (device dm-0): first mount of filesystem 9d729180-1373-4e9f-840c-4db0e9220239
Jul 7 00:17:25.579049 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:17:25.579081 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 00:17:25.704088 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:17:25.705413 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 00:17:25.706119 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:17:25.707392 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:17:25.709970 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:17:25.746225 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1016)
Jul 7 00:17:25.750562 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:17:25.750627 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:17:25.754092 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 00:17:25.765287 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b
Jul 7 00:17:25.766964 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:17:25.770448 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:17:25.869161 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:17:25.872344 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:17:25.920366 systemd-networkd[1152]: lo: Link UP
Jul 7 00:17:25.920376 systemd-networkd[1152]: lo: Gained carrier
Jul 7 00:17:25.922081 systemd-networkd[1152]: Enumeration completed
Jul 7 00:17:25.922254 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:17:25.923137 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:17:25.923143 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:17:25.923802 systemd[1]: Reached target network.target - Network.
Jul 7 00:17:25.926354 systemd-networkd[1152]: eth0: Link UP
Jul 7 00:17:25.926359 systemd-networkd[1152]: eth0: Gained carrier
Jul 7 00:17:25.926378 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:17:25.937314 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.31.140/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 00:17:26.335065 ignition[1087]: Ignition 2.21.0
Jul 7 00:17:26.335082 ignition[1087]: Stage: fetch-offline
Jul 7 00:17:26.335327 ignition[1087]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:17:26.335340 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:17:26.335619 ignition[1087]: Ignition finished successfully
Jul 7 00:17:26.339152 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:17:26.340635 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:17:26.364857 ignition[1161]: Ignition 2.21.0
Jul 7 00:17:26.364873 ignition[1161]: Stage: fetch
Jul 7 00:17:26.365281 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:17:26.365295 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:17:26.365432 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:17:26.426285 ignition[1161]: PUT result: OK
Jul 7 00:17:26.450073 ignition[1161]: parsed url from cmdline: ""
Jul 7 00:17:26.450090 ignition[1161]: no config URL provided
Jul 7 00:17:26.450102 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:17:26.450119 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:17:26.450155 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:17:26.463398 ignition[1161]: PUT result: OK
Jul 7 00:17:26.463548 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 7 00:17:26.472250 ignition[1161]: GET result: OK
Jul 7 00:17:26.472354 ignition[1161]: parsing config with SHA512: cb30927068279c467f43120f24ea0b7306f9d17ad67b708fcf5a6d9691f4a58c1bc71979125dbcef50c1985f6aa6b12cfed98e728e1c5ab5bae7a815b96d6507
Jul 7 00:17:26.479141 unknown[1161]: fetched base config from "system"
Jul 7 00:17:26.479508 ignition[1161]: fetch: fetch complete
Jul 7 00:17:26.479151 unknown[1161]: fetched base config from "system"
Jul 7 00:17:26.479513 ignition[1161]: fetch: fetch passed
Jul 7 00:17:26.479156 unknown[1161]: fetched user config from "aws"
Jul 7 00:17:26.479554 ignition[1161]: Ignition finished successfully
Jul 7 00:17:26.483227 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:17:26.485230 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:17:26.517672 ignition[1167]: Ignition 2.21.0
Jul 7 00:17:26.517688 ignition[1167]: Stage: kargs
Jul 7 00:17:26.518084 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:17:26.518096 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:17:26.518239 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:17:26.519537 ignition[1167]: PUT result: OK
Jul 7 00:17:26.522180 ignition[1167]: kargs: kargs passed
Jul 7 00:17:26.522264 ignition[1167]: Ignition finished successfully
Jul 7 00:17:26.524519 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:17:26.525979 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:17:26.559596 ignition[1173]: Ignition 2.21.0
Jul 7 00:17:26.559613 ignition[1173]: Stage: disks
Jul 7 00:17:26.560037 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:17:26.560050 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 00:17:26.560189 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 00:17:26.561520 ignition[1173]: PUT result: OK
Jul 7 00:17:26.565385 ignition[1173]: disks: disks passed
Jul 7 00:17:26.565467 ignition[1173]: Ignition finished successfully
Jul 7 00:17:26.567701 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:17:26.568389 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:17:26.568772 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:17:26.569396 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:17:26.569936 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:17:26.570518 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:17:26.572340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:17:26.609633 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 00:17:26.619144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:17:26.621456 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:17:26.788241 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 98c55dfc-aac4-4fdd-8ec0-1f5587b3aa36 r/w with ordered data mode. Quota mode: none.
Jul 7 00:17:26.789233 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:17:26.790106 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:17:26.792411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:17:26.795421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:17:26.797086 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 00:17:26.797898 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:17:26.797928 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:17:26.803781 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:17:26.806319 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:17:26.819238 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Jul 7 00:17:26.826225 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:17:26.826295 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:17:26.826326 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 00:17:26.839966 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:17:27.201683 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 00:17:27.231431 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Jul 7 00:17:27.253051 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 00:17:27.257967 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 00:17:27.300266 systemd-networkd[1152]: eth0: Gained IPv6LL Jul 7 00:17:27.551654 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 00:17:27.553820 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 00:17:27.557380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 00:17:27.571956 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 00:17:27.574219 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:17:27.604776 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 00:17:27.608155 ignition[1314]: INFO : Ignition 2.21.0 Jul 7 00:17:27.608155 ignition[1314]: INFO : Stage: mount Jul 7 00:17:27.609726 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:17:27.609726 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:17:27.609726 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:17:27.612813 ignition[1314]: INFO : PUT result: OK Jul 7 00:17:27.615539 ignition[1314]: INFO : mount: mount passed Jul 7 00:17:27.616127 ignition[1314]: INFO : Ignition finished successfully Jul 7 00:17:27.617125 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 00:17:27.619046 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 00:17:27.791367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 00:17:27.819253 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Jul 7 00:17:27.822421 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5b10ed8-ad12-45a6-8115-f8814df6901b Jul 7 00:17:27.822500 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:17:27.825105 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 00:17:27.832232 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
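
The mounts above are addressed by filesystem label (ROOT for the ext4 root, OEM for the btrfs partition on /dev/nvme0n1p6) rather than by device node. Labels resolve through udev's /dev/disk/by-label symlinks, a standard layout the log does not print; a quick sketch for inspecting it on a running system:

    import os
    from pathlib import Path

    # udev keeps one symlink per filesystem label; on this instance ROOT and OEM
    # should point at partitions of the nvme0n1 EBS volume seen above.
    for link in sorted(Path("/dev/disk/by-label").iterdir()):
        print(f"{link.name:>8} -> {os.path.realpath(link)}")
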
Jul 7 00:17:27.860199 ignition[1342]: INFO : Ignition 2.21.0 Jul 7 00:17:27.860199 ignition[1342]: INFO : Stage: files Jul 7 00:17:27.861625 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:17:27.861625 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:17:27.861625 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:17:27.863076 ignition[1342]: INFO : PUT result: OK Jul 7 00:17:27.865447 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:17:27.867284 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:17:27.867284 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:17:27.880960 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:17:27.881984 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:17:27.881984 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:17:27.881517 unknown[1342]: wrote ssh authorized keys file for user: core Jul 7 00:17:27.893734 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 00:17:27.894858 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 00:17:27.966532 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:17:28.155424 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 00:17:28.155424 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:17:28.157457 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 00:17:28.590830 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 00:17:28.697792 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 00:17:28.697792 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:17:28.699687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:17:28.699687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:17:28.699687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:17:28.699687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:17:28.699687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:17:28.699687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:17:28.699687 
ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:17:28.705284 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:17:28.705284 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:17:28.705284 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:17:28.708320 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:17:28.708320 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:17:28.708320 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 00:17:29.235995 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 00:17:29.627700 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 00:17:29.627700 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 00:17:29.630466 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:17:29.634347 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:17:29.634347 ignition[1342]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 00:17:29.634347 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:17:29.637148 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:17:29.637148 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:17:29.637148 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:17:29.637148 ignition[1342]: INFO : files: files passed Jul 7 00:17:29.637148 ignition[1342]: INFO : Ignition finished successfully Jul 7 00:17:29.636701 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:17:29.638539 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:17:29.644383 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:17:29.652653 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:17:29.652774 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
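
Besides the payload files (the helm and cilium archives, the kubernetes sysext image, the prepare-helm.service unit), op(f) above writes a machine-readable summary to /sysroot/etc/.ignition-result.json, i.e. /etc/.ignition-result.json once the real root is switched to. Its schema is not shown in the log, so this sketch simply pretty-prints whatever was recorded:

    import json
    from pathlib import Path

    # Result file written by the Ignition files stage (op(f) in the entries above).
    result = json.loads(Path("/etc/.ignition-result.json").read_text())
    print(json.dumps(result, indent=2, sort_keys=True))
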
Jul 7 00:17:29.668222 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:17:29.668222 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:17:29.671588 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:17:29.672093 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:17:29.673702 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:17:29.675622 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:17:29.745426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:17:29.745553 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:17:29.747473 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:17:29.747959 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:17:29.748679 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:17:29.749748 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:17:29.780414 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:17:29.782528 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:17:29.806595 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:17:29.807466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:17:29.808567 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:17:29.809424 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:17:29.809615 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:17:29.810932 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:17:29.811903 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:17:29.812716 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 00:17:29.813506 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:17:29.814302 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:17:29.815129 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 00:17:29.816001 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:17:29.816804 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:17:29.817638 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:17:29.818802 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:17:29.819679 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:17:29.820434 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:17:29.820672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:17:29.821686 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:17:29.822519 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:17:29.823362 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 7 00:17:29.823499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:17:29.824150 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:17:29.824398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:17:29.825440 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:17:29.825631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:17:29.826359 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:17:29.826559 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:17:29.829333 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:17:29.830187 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:17:29.830452 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:17:29.835691 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:17:29.836996 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:17:29.837905 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:17:29.839416 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:17:29.840314 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 00:17:29.848244 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:17:29.849071 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:17:29.869504 ignition[1397]: INFO : Ignition 2.21.0 Jul 7 00:17:29.871619 ignition[1397]: INFO : Stage: umount Jul 7 00:17:29.871619 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:17:29.871619 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 7 00:17:29.871619 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 7 00:17:29.875308 ignition[1397]: INFO : PUT result: OK Jul 7 00:17:29.872600 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:17:29.877284 ignition[1397]: INFO : umount: umount passed Jul 7 00:17:29.877284 ignition[1397]: INFO : Ignition finished successfully Jul 7 00:17:29.880723 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:17:29.880876 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:17:29.882014 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:17:29.882145 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:17:29.883618 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:17:29.883712 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:17:29.884548 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:17:29.884613 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:17:29.885184 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 00:17:29.885376 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 00:17:29.885919 systemd[1]: Stopped target network.target - Network. Jul 7 00:17:29.886518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:17:29.886583 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:17:29.887302 systemd[1]: Stopped target paths.target - Path Units. 
Jul 7 00:17:29.887890 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:17:29.887979 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:17:29.888570 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:17:29.889170 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:17:29.889831 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:17:29.889886 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:17:29.890488 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:17:29.890536 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:17:29.891258 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:17:29.891342 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:17:29.892268 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:17:29.892331 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:17:29.892906 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:17:29.892972 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:17:29.893745 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:17:29.894412 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:17:29.899669 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:17:29.899780 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:17:29.904043 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 00:17:29.904539 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:17:29.904702 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:17:29.907116 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 00:17:29.908586 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 00:17:29.909030 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:17:29.909084 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:17:29.910893 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:17:29.912387 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:17:29.912471 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:17:29.913102 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:17:29.913168 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:17:29.916366 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:17:29.916446 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:17:29.916908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:17:29.916970 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:17:29.918215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:17:29.920773 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:17:29.920866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Jul 7 00:17:29.934596 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:17:29.936439 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:17:29.939273 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:17:29.939342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 00:17:29.940675 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:17:29.940724 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:17:29.941433 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:17:29.941504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:17:29.942778 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:17:29.942846 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:17:29.944064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:17:29.944134 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:17:29.946268 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:17:29.947243 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 00:17:29.947325 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:17:29.948974 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:17:29.949043 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:17:29.951345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:17:29.951409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:17:29.954976 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 00:17:29.955060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 00:17:29.955118 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 00:17:29.955572 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:17:29.955743 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:17:29.964542 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:17:29.964670 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:17:29.966217 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:17:29.967966 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:17:29.984025 systemd[1]: Switching root. Jul 7 00:17:30.031274 systemd-journald[207]: Journal stopped Jul 7 00:17:32.090338 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). 
Jul 7 00:17:32.090439 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:17:32.090465 kernel: SELinux: policy capability open_perms=1 Jul 7 00:17:32.090486 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:17:32.090507 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:17:32.090529 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:17:32.090550 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:17:32.090575 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:17:32.090594 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:17:32.090612 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 00:17:32.090632 kernel: audit: type=1403 audit(1751847450.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:17:32.090671 systemd[1]: Successfully loaded SELinux policy in 84.111ms. Jul 7 00:17:32.090705 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.492ms. Jul 7 00:17:32.090728 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 00:17:32.090754 systemd[1]: Detected virtualization amazon. Jul 7 00:17:32.090774 systemd[1]: Detected architecture x86-64. Jul 7 00:17:32.090794 systemd[1]: Detected first boot. Jul 7 00:17:32.090814 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:17:32.090833 zram_generator::config[1441]: No configuration found. Jul 7 00:17:32.090853 kernel: Guest personality initialized and is inactive Jul 7 00:17:32.090871 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 00:17:32.090890 kernel: Initialized host personality Jul 7 00:17:32.090912 kernel: NET: Registered PF_VSOCK protocol family Jul 7 00:17:32.090930 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:17:32.090951 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 00:17:32.090973 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:17:32.091000 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:17:32.091021 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:17:32.091044 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:17:32.091067 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:17:32.091088 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:17:32.091114 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:17:32.091137 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:17:32.091158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:17:32.091181 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:17:32.091238 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:17:32.091257 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:17:32.091275 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 7 00:17:32.091294 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:17:32.091316 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:17:32.091336 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:17:32.091355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:17:32.091374 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:17:32.091394 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:17:32.091412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:17:32.091431 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:17:32.091453 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:17:32.091477 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:17:32.091498 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:17:32.091520 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:17:32.091540 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:17:32.091561 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:17:32.091582 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:17:32.091601 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:17:32.091620 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:17:32.091640 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 00:17:32.091662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:17:32.091680 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 00:17:32.091699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:17:32.091719 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:17:32.091737 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:17:32.091756 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:17:32.091775 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:17:32.091795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:32.091813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:17:32.091837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:17:32.091856 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:17:32.091877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:17:32.091896 systemd[1]: Reached target machines.target - Containers. Jul 7 00:17:32.091915 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:17:32.091935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:17:32.091955 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 7 00:17:32.091973 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:17:32.091996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:17:32.092015 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:17:32.092035 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:17:32.092054 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:17:32.092073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:17:32.092094 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:17:32.092113 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:17:32.092132 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:17:32.092152 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:17:32.092175 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:17:32.092234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:17:32.092256 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:17:32.092275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:17:32.092296 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:17:32.092315 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:17:32.092335 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 00:17:32.092356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:17:32.092380 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:17:32.092400 systemd[1]: Stopped verity-setup.service. Jul 7 00:17:32.092422 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:32.092445 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:17:32.092464 kernel: fuse: init (API version 7.41) Jul 7 00:17:32.092485 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:17:32.092504 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:17:32.092524 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:17:32.092544 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:17:32.092565 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:17:32.092587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:17:32.092610 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:17:32.092630 kernel: loop: module loaded Jul 7 00:17:32.092649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:17:32.092669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:17:32.092690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:17:32.092709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 7 00:17:32.092729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:17:32.092750 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:17:32.092775 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:17:32.092797 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:17:32.092819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:17:32.092841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:17:32.092863 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:17:32.092886 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:17:32.092921 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:17:32.092942 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:17:32.092964 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:17:32.092990 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:17:32.093013 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 00:17:32.093041 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 00:17:32.093063 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:17:32.093082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:17:32.093105 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:17:32.093126 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:17:32.093146 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:17:32.093165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:17:32.093185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:17:32.098329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:17:32.098404 systemd-journald[1520]: Collecting audit messages is disabled. Jul 7 00:17:32.098450 systemd-journald[1520]: Journal started Jul 7 00:17:32.098488 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec2dbaaddc20cf6f692b51187f19f58f) is 4.8M, max 38.4M, 33.6M free. Jul 7 00:17:31.586478 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:17:32.101407 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:17:31.610617 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 00:17:31.611347 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:17:32.114565 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:17:32.117283 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 00:17:32.119293 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:17:32.120195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jul 7 00:17:32.131280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:17:32.159241 kernel: ACPI: bus type drm_connector registered Jul 7 00:17:32.161995 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:17:32.162817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:17:32.174763 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:17:32.184645 kernel: loop0: detected capacity change from 0 to 146240 Jul 7 00:17:32.181683 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:17:32.190401 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:17:32.198748 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 00:17:32.212442 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:17:32.239442 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec2dbaaddc20cf6f692b51187f19f58f is 46.645ms for 1020 entries. Jul 7 00:17:32.239442 systemd-journald[1520]: System Journal (/var/log/journal/ec2dbaaddc20cf6f692b51187f19f58f) is 8M, max 195.6M, 187.6M free. Jul 7 00:17:32.299022 systemd-journald[1520]: Received client request to flush runtime journal. Jul 7 00:17:32.239512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:17:32.259668 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 00:17:32.300545 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:17:32.302927 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 00:17:32.309963 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:17:32.325565 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:17:32.350439 kernel: loop1: detected capacity change from 0 to 221472 Jul 7 00:17:32.375932 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Jul 7 00:17:32.376367 systemd-tmpfiles[1591]: ACLs are not supported, ignoring. Jul 7 00:17:32.383692 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:17:32.582224 kernel: loop2: detected capacity change from 0 to 72352 Jul 7 00:17:32.615338 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:17:32.640078 kernel: loop3: detected capacity change from 0 to 113872 Jul 7 00:17:32.749246 kernel: loop4: detected capacity change from 0 to 146240 Jul 7 00:17:32.771240 kernel: loop5: detected capacity change from 0 to 221472 Jul 7 00:17:32.799240 kernel: loop6: detected capacity change from 0 to 72352 Jul 7 00:17:32.826229 kernel: loop7: detected capacity change from 0 to 113872 Jul 7 00:17:32.842277 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 7 00:17:32.844522 (sd-merge)[1598]: Merged extensions into '/usr'. Jul 7 00:17:32.850299 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:17:32.850463 systemd[1]: Reloading... Jul 7 00:17:32.960388 zram_generator::config[1623]: No configuration found. Jul 7 00:17:33.171983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
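
The sd-merge entries above show systemd-sysext overlaying four extension images onto /usr ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') and triggering the reload that follows. The kubernetes image is the one the files stage linked into /etc/extensions earlier; the Flatcar-provided images ship with the OS and their location is not printed here. A sketch, assuming the standard /etc/extensions search directory, for listing what a running system would merge from there:

    import os
    from pathlib import Path

    # /etc/extensions is one of systemd-sysext's search directories; on this machine it
    # holds the kubernetes.raw symlink written by Ignition, pointing into /opt/extensions.
    for image in sorted(Path("/etc/extensions").glob("*.raw")):
        print(f"{image.name} -> {os.path.realpath(image)}")
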
Jul 7 00:17:33.343012 systemd[1]: Reloading finished in 491 ms. Jul 7 00:17:33.357381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:17:33.358123 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:17:33.366589 systemd[1]: Starting ensure-sysext.service... Jul 7 00:17:33.372341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:17:33.376317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:17:33.393142 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 00:17:33.393171 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 00:17:33.393480 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:17:33.393725 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:17:33.394997 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:17:33.395382 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 7 00:17:33.395506 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Jul 7 00:17:33.395988 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:17:33.396009 systemd[1]: Reloading... Jul 7 00:17:33.401745 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:17:33.403243 systemd-tmpfiles[1677]: Skipping /boot Jul 7 00:17:33.428460 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:17:33.428631 systemd-tmpfiles[1677]: Skipping /boot Jul 7 00:17:33.463484 systemd-udevd[1678]: Using default interface naming scheme 'v255'. Jul 7 00:17:33.503234 zram_generator::config[1705]: No configuration found. Jul 7 00:17:33.789754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:17:33.858993 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:17:33.870833 (udev-worker)[1725]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:17:33.921227 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:17:33.951230 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 00:17:33.962224 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 7 00:17:33.965702 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:17:33.965802 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jul 7 00:17:33.969226 kernel: ACPI: button: Sleep Button [SLPF] Jul 7 00:17:34.040870 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:17:34.041407 systemd[1]: Reloading finished in 645 ms. Jul 7 00:17:34.052021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:17:34.053660 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jul 7 00:17:34.056915 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:17:34.091454 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:17:34.097235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:17:34.103451 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:17:34.111466 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:17:34.122243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:17:34.125463 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:17:34.140511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:34.140816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:17:34.142741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:17:34.148233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:17:34.150680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:17:34.151400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:17:34.151567 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:17:34.151712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:34.157196 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:17:34.160998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:34.163240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:17:34.163467 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:17:34.163591 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:17:34.163721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:34.171109 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:34.172053 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:17:34.184906 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:17:34.187481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 00:17:34.187692 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 00:17:34.187959 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:17:34.188820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:17:34.198402 systemd[1]: Finished ensure-sysext.service. Jul 7 00:17:34.210442 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:17:34.253835 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:17:34.259774 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:17:34.261828 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:17:34.262069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:17:34.269972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:17:34.273271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:17:34.275826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:17:34.278261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:17:34.278528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:17:34.285434 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:17:34.286154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:17:34.290307 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:17:34.291917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:17:34.312288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:17:34.313844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:17:34.321610 augenrules[1908]: No rules Jul 7 00:17:34.326028 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:17:34.327270 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:17:34.465126 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:17:34.508649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 00:17:34.513478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:17:34.530145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:17:34.562428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:17:34.562954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:17:34.572579 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:17:34.590830 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 7 00:17:34.714401 systemd-networkd[1825]: lo: Link UP Jul 7 00:17:34.714773 systemd-networkd[1825]: lo: Gained carrier Jul 7 00:17:34.716682 systemd-networkd[1825]: Enumeration completed Jul 7 00:17:34.717321 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:17:34.719467 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:17:34.719476 systemd-networkd[1825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 00:17:34.721396 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 00:17:34.725791 systemd-networkd[1825]: eth0: Link UP Jul 7 00:17:34.726036 systemd-networkd[1825]: eth0: Gained carrier Jul 7 00:17:34.726070 systemd-networkd[1825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:17:34.726256 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:17:34.736009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:17:34.737355 systemd-networkd[1825]: eth0: DHCPv4 address 172.31.31.140/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 00:17:34.761289 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 00:17:34.767724 systemd-resolved[1828]: Positive Trust Anchors: Jul 7 00:17:34.767747 systemd-resolved[1828]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:17:34.767798 systemd-resolved[1828]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:17:34.772465 systemd-resolved[1828]: Defaulting to hostname 'linux'. Jul 7 00:17:34.774386 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:17:34.775133 systemd[1]: Reached target network.target - Network. Jul 7 00:17:34.775569 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:17:34.775959 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:17:34.776438 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:17:34.776801 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:17:34.777136 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 00:17:34.777617 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:17:34.778050 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:17:34.778392 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:17:34.778729 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
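
After switch-root, systemd-networkd reacquires the same DHCPv4 lease it held in the initramfs: 172.31.31.140/20 with gateway 172.31.16.1. A quick check with the standard ipaddress module confirms those numbers are self-consistent (illustrative only; nothing here comes from systemd itself):

    import ipaddress

    # Lease exactly as reported for eth0 in the entries above.
    iface = ipaddress.ip_interface("172.31.31.140/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: the advertised gateway sits inside that /20
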
Jul 7 00:17:34.778825 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:17:34.779143 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:17:34.781790 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:17:34.783820 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 00:17:34.787010 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 00:17:34.787681 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 00:17:34.788132 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 00:17:34.791037 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:17:34.791952 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 00:17:34.793221 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:17:34.794499 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:17:34.795042 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:17:34.795539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:17:34.795576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:17:34.796728 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:17:34.799302 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 00:17:34.803449 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:17:34.812740 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:17:34.815190 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:17:34.821512 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:17:34.824468 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:17:34.828721 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 00:17:34.845387 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:17:34.852493 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 00:17:34.856398 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:17:34.862425 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 00:17:34.869464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:17:34.874325 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:17:34.884522 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:17:34.887832 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:17:34.897299 jq[1962]: false Jul 7 00:17:34.889546 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 00:17:34.892084 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:17:34.903246 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 7 00:17:34.915783 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:17:34.916791 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:17:34.920527 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:17:34.963574 oslogin_cache_refresh[1964]: Refreshing passwd entry cache Jul 7 00:17:34.967626 google_oslogin_nss_cache[1964]: oslogin_cache_refresh[1964]: Refreshing passwd entry cache Jul 7 00:17:34.969286 extend-filesystems[1963]: Found /dev/nvme0n1p6 Jul 7 00:17:34.977188 google_oslogin_nss_cache[1964]: oslogin_cache_refresh[1964]: Failure getting users, quitting Jul 7 00:17:34.977188 google_oslogin_nss_cache[1964]: oslogin_cache_refresh[1964]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:17:34.977188 google_oslogin_nss_cache[1964]: oslogin_cache_refresh[1964]: Refreshing group entry cache Jul 7 00:17:34.975241 oslogin_cache_refresh[1964]: Failure getting users, quitting Jul 7 00:17:34.975264 oslogin_cache_refresh[1964]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 00:17:34.975326 oslogin_cache_refresh[1964]: Refreshing group entry cache Jul 7 00:17:34.984665 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:17:34.985136 update_engine[1974]: I20250707 00:17:34.983340 1974 main.cc:92] Flatcar Update Engine starting Jul 7 00:17:34.981551 oslogin_cache_refresh[1964]: Failure getting groups, quitting Jul 7 00:17:34.985502 google_oslogin_nss_cache[1964]: oslogin_cache_refresh[1964]: Failure getting groups, quitting Jul 7 00:17:34.985502 google_oslogin_nss_cache[1964]: oslogin_cache_refresh[1964]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:17:34.981566 oslogin_cache_refresh[1964]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 00:17:34.986021 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:17:34.990776 ntpd[1966]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:17:34.990818 ntpd[1966]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:42 UTC 2025 (1): Starting Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: ---------------------------------------------------- Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: corporation. Support and training for ntp-4 are Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: available at https://www.nwtime.org/support Jul 7 00:17:34.991181 ntpd[1966]: 7 Jul 00:17:34 ntpd[1966]: ---------------------------------------------------- Jul 7 00:17:34.990828 ntpd[1966]: ---------------------------------------------------- Jul 7 00:17:34.990836 ntpd[1966]: ntp-4 is maintained by Network Time Foundation, Jul 7 00:17:34.990844 ntpd[1966]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 00:17:34.990852 ntpd[1966]: corporation. 
Support and training for ntp-4 are Jul 7 00:17:34.990862 ntpd[1966]: available at https://www.nwtime.org/support Jul 7 00:17:34.990871 ntpd[1966]: ---------------------------------------------------- Jul 7 00:17:35.004677 extend-filesystems[1963]: Found /dev/nvme0n1p9 Jul 7 00:17:35.014674 tar[1980]: linux-amd64/helm Jul 7 00:17:35.015003 jq[1975]: true Jul 7 00:17:34.999123 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 00:17:35.018409 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: proto: precision = 0.104 usec (-23) Jul 7 00:17:35.018409 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: basedate set to 2025-06-24 Jul 7 00:17:35.018409 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: gps base set to 2025-06-29 (week 2373) Jul 7 00:17:35.005589 ntpd[1966]: proto: precision = 0.104 usec (-23) Jul 7 00:17:35.001603 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 00:17:35.018727 extend-filesystems[1963]: Checking size of /dev/nvme0n1p9 Jul 7 00:17:35.008490 ntpd[1966]: basedate set to 2025-06-24 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: Listen normally on 3 eth0 172.31.31.140:123 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: Listen normally on 4 lo [::1]:123 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: bind(21) AF_INET6 fe80::477:14ff:fe69:3f6f%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: unable to create socket on eth0 (5) for fe80::477:14ff:fe69:3f6f%2#123 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: failed to init interface for address fe80::477:14ff:fe69:3f6f%2 Jul 7 00:17:35.030430 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: Listening on routing socket on fd #21 for interface updates Jul 7 00:17:35.008513 ntpd[1966]: gps base set to 2025-06-29 (week 2373) Jul 7 00:17:35.026806 ntpd[1966]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 00:17:35.026866 ntpd[1966]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 00:17:35.027363 ntpd[1966]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 00:17:35.027407 ntpd[1966]: Listen normally on 3 eth0 172.31.31.140:123 Jul 7 00:17:35.027450 ntpd[1966]: Listen normally on 4 lo [::1]:123 Jul 7 00:17:35.027496 ntpd[1966]: bind(21) AF_INET6 fe80::477:14ff:fe69:3f6f%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 00:17:35.027518 ntpd[1966]: unable to create socket on eth0 (5) for fe80::477:14ff:fe69:3f6f%2#123 Jul 7 00:17:35.027532 ntpd[1966]: failed to init interface for address fe80::477:14ff:fe69:3f6f%2 Jul 7 00:17:35.027565 ntpd[1966]: Listening on routing socket on fd #21 for interface updates Jul 7 00:17:35.052746 (ntainerd)[1994]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:17:35.058172 ntpd[1966]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:17:35.084195 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:17:35.084195 ntpd[1966]: 7 Jul 00:17:35 ntpd[1966]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:17:35.081722 ntpd[1966]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 00:17:35.085692 
coreos-metadata[1959]: Jul 07 00:17:35.085 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 00:17:35.088292 coreos-metadata[1959]: Jul 07 00:17:35.086 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 00:17:35.088412 jq[1999]: true Jul 7 00:17:35.089025 coreos-metadata[1959]: Jul 07 00:17:35.088 INFO Fetch successful Jul 7 00:17:35.089025 coreos-metadata[1959]: Jul 07 00:17:35.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 00:17:35.093591 coreos-metadata[1959]: Jul 07 00:17:35.093 INFO Fetch successful Jul 7 00:17:35.093591 coreos-metadata[1959]: Jul 07 00:17:35.093 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 00:17:35.094485 coreos-metadata[1959]: Jul 07 00:17:35.094 INFO Fetch successful Jul 7 00:17:35.095344 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:17:35.099488 coreos-metadata[1959]: Jul 07 00:17:35.098 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 00:17:35.096320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:17:35.109338 extend-filesystems[1963]: Resized partition /dev/nvme0n1p9 Jul 7 00:17:35.113618 coreos-metadata[1959]: Jul 07 00:17:35.106 INFO Fetch successful Jul 7 00:17:35.113618 coreos-metadata[1959]: Jul 07 00:17:35.108 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 00:17:35.113618 coreos-metadata[1959]: Jul 07 00:17:35.113 INFO Fetch failed with 404: resource not found Jul 7 00:17:35.113618 coreos-metadata[1959]: Jul 07 00:17:35.113 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 00:17:35.120043 coreos-metadata[1959]: Jul 07 00:17:35.117 INFO Fetch successful Jul 7 00:17:35.120043 coreos-metadata[1959]: Jul 07 00:17:35.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 00:17:35.138436 coreos-metadata[1959]: Jul 07 00:17:35.125 INFO Fetch successful Jul 7 00:17:35.138436 coreos-metadata[1959]: Jul 07 00:17:35.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 00:17:35.138613 extend-filesystems[2020]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 00:17:35.141198 coreos-metadata[1959]: Jul 07 00:17:35.139 INFO Fetch successful Jul 7 00:17:35.141198 coreos-metadata[1959]: Jul 07 00:17:35.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 00:17:35.142302 coreos-metadata[1959]: Jul 07 00:17:35.141 INFO Fetch successful Jul 7 00:17:35.142302 coreos-metadata[1959]: Jul 07 00:17:35.141 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 00:17:35.145793 coreos-metadata[1959]: Jul 07 00:17:35.143 INFO Fetch successful Jul 7 00:17:35.158090 dbus-daemon[1960]: [system] SELinux support is enabled Jul 7 00:17:35.166253 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 00:17:35.161910 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
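The coreos-metadata fetches above follow the IMDSv2 pattern: PUT a session token to http://169.254.169.254/latest/api/token, then present it as a header on each metadata GET (the 404 on .../meta-data/ipv6 just means the instance has no IPv6 address assigned). A stand-alone sketch of the same exchange, runnable only from inside an EC2 instance and using the same 2021-01-03 API paths the agent logged:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain an IMDSv2 session token
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: fetch a few of the paths seen in the log above
for path in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(path, "=", urllib.request.urlopen(req, timeout=2).read().decode())
```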
Jul 7 00:17:35.191377 dbus-daemon[1960]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1825 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 00:17:35.180892 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:17:35.180955 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:17:35.182853 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:17:35.182882 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:17:35.187189 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 00:17:35.198798 dbus-daemon[1960]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 00:17:35.208218 update_engine[1974]: I20250707 00:17:35.206413 1974 update_check_scheduler.cc:74] Next update check in 7m6s Jul 7 00:17:35.209415 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 00:17:35.210099 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:17:35.276710 systemd-logind[1973]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:17:35.277015 systemd-logind[1973]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 7 00:17:35.277039 systemd-logind[1973]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:17:35.277441 systemd-logind[1973]: New seat seat0. Jul 7 00:17:35.279705 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:17:35.281647 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:17:35.293231 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 00:17:35.305235 extend-filesystems[2020]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 00:17:35.305235 extend-filesystems[2020]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:17:35.305235 extend-filesystems[2020]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 7 00:17:35.324151 extend-filesystems[1963]: Resized filesystem in /dev/nvme0n1p9 Jul 7 00:17:35.313356 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:17:35.313648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:17:35.351236 bash[2052]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:17:35.351055 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 00:17:35.367357 systemd[1]: Starting sshkeys.service... Jul 7 00:17:35.369263 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 00:17:35.372765 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:17:35.409990 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 00:17:35.416559 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 00:17:35.600615 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
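The resize2fs figures above amount to the usual first-boot step of growing the root filesystem on nvme0n1p9 to fill the available space: roughly 2.1 GiB expanding to about 5.7 GiB. The arithmetic, using the logged 4 KiB block counts:

```python
# Block counts logged by resize2fs above, at 4 KiB per block
for label, blocks in (("before", 553_472), ("after", 1_489_915)):
    print(f"{label}: {blocks * 4096 / 2**30:.2f} GiB")
# before: 2.11 GiB
# after:  5.68 GiB
```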
Jul 7 00:17:35.628038 dbus-daemon[1960]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 00:17:35.659410 dbus-daemon[1960]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 00:17:35.681968 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 00:17:35.813717 systemd-networkd[1825]: eth0: Gained IPv6LL Jul 7 00:17:35.826109 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:17:35.829489 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:17:35.835401 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 00:17:35.845650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:17:35.850723 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:17:35.884588 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:17:35.889669 polkitd[2144]: Started polkitd version 126 Jul 7 00:17:35.922959 coreos-metadata[2060]: Jul 07 00:17:35.922 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 00:17:35.935872 coreos-metadata[2060]: Jul 07 00:17:35.931 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 00:17:35.935872 coreos-metadata[2060]: Jul 07 00:17:35.935 INFO Fetch successful Jul 7 00:17:35.935872 coreos-metadata[2060]: Jul 07 00:17:35.935 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 00:17:35.938714 coreos-metadata[2060]: Jul 07 00:17:35.937 INFO Fetch successful Jul 7 00:17:35.943532 unknown[2060]: wrote ssh authorized keys file for user: core Jul 7 00:17:35.981249 polkitd[2144]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 00:17:35.981862 polkitd[2144]: Loading rules from directory /run/polkit-1/rules.d Jul 7 00:17:35.981925 polkitd[2144]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:17:35.992467 polkitd[2144]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 00:17:35.992540 polkitd[2144]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 00:17:35.992586 polkitd[2144]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 00:17:36.003518 polkitd[2144]: Finished loading, compiling and executing 2 rules Jul 7 00:17:36.003866 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 00:17:36.019442 dbus-daemon[1960]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 00:17:36.026884 polkitd[2144]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 00:17:36.038772 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:17:36.045917 amazon-ssm-agent[2153]: Initializing new seelog logger Jul 7 00:17:36.047428 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:17:36.053118 amazon-ssm-agent[2153]: New Seelog Logger Creation Complete Jul 7 00:17:36.053118 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.053118 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 7 00:17:36.049290 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 00:17:36.054312 systemd[1]: Finished sshkeys.service. Jul 7 00:17:36.059847 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 processing appconfig overrides Jul 7 00:17:36.070746 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.070746 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.070746 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 processing appconfig overrides Jul 7 00:17:36.070746 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.070746 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.070746 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 processing appconfig overrides Jul 7 00:17:36.083268 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.0689 INFO Proxy environment variables: Jul 7 00:17:36.108595 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.108595 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.108595 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 processing appconfig overrides Jul 7 00:17:36.142189 systemd-hostnamed[2030]: Hostname set to (transient) Jul 7 00:17:36.143566 systemd-resolved[1828]: System hostname changed to 'ip-172-31-31-140'. Jul 7 00:17:36.164014 containerd[1994]: time="2025-07-07T00:17:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 00:17:36.168146 containerd[1994]: time="2025-07-07T00:17:36.167798767Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 00:17:36.182751 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.0692 INFO https_proxy: Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.202189402Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.923µs" Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.207959320Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.207990915Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208167352Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208185960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208231549Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208294813Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208309347Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208619565Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208637546Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208651832Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209054 containerd[1994]: time="2025-07-07T00:17:36.208663678Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209577 containerd[1994]: time="2025-07-07T00:17:36.208739838Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209577 containerd[1994]: time="2025-07-07T00:17:36.208965971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209577 containerd[1994]: time="2025-07-07T00:17:36.208998993Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 00:17:36.209577 containerd[1994]: time="2025-07-07T00:17:36.209012735Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 00:17:36.211194 containerd[1994]: time="2025-07-07T00:17:36.210756960Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 00:17:36.211891 containerd[1994]: time="2025-07-07T00:17:36.211742813Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 00:17:36.211891 containerd[1994]: time="2025-07-07T00:17:36.211859785Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.218855449Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.218944212Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.218964348Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219021328Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219039912Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219055230Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219087186Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219105073Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219129576Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219144281Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219156896Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219174984Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219341990Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 00:17:36.219821 containerd[1994]: time="2025-07-07T00:17:36.219374782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219396369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219412863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219427233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219443525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219459605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219479533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219496681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219511603Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219526237Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219599144Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 00:17:36.220477 containerd[1994]: time="2025-07-07T00:17:36.219616130Z" level=info msg="Start snapshots syncer" Jul 7 00:17:36.222694 containerd[1994]: time="2025-07-07T00:17:36.220917730Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 00:17:36.222694 containerd[1994]: time="2025-07-07T00:17:36.221682279Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 00:17:36.222979 containerd[1994]: time="2025-07-07T00:17:36.221752104Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.223724746Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.223922011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.223955440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.223971893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.223992159Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.224009946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.224024718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.224042208Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.224086954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: 
time="2025-07-07T00:17:36.224102943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 00:17:36.224170 containerd[1994]: time="2025-07-07T00:17:36.224118215Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 00:17:36.225581 containerd[1994]: time="2025-07-07T00:17:36.225480807Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226257629Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226281444Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226297486Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226309783Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226330728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226347190Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226370451Z" level=info msg="runtime interface created" Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226377691Z" level=info msg="created NRI interface" Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226389650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226410642Z" level=info msg="Connect containerd service" Jul 7 00:17:36.226509 containerd[1994]: time="2025-07-07T00:17:36.226458113Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:17:36.238301 containerd[1994]: time="2025-07-07T00:17:36.235872458Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:17:36.283127 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.0692 INFO http_proxy: Jul 7 00:17:36.375834 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:17:36.382633 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.0692 INFO no_proxy: Jul 7 00:17:36.482897 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.0694 INFO Checking if agent identity type OnPrem can be assumed Jul 7 00:17:36.502774 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:17:36.508842 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:17:36.562467 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:17:36.562786 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 7 00:17:36.570096 containerd[1994]: time="2025-07-07T00:17:36.570040706Z" level=info msg="Start subscribing containerd event" Jul 7 00:17:36.571315 containerd[1994]: time="2025-07-07T00:17:36.570311293Z" level=info msg="Start recovering state" Jul 7 00:17:36.571566 containerd[1994]: time="2025-07-07T00:17:36.571547611Z" level=info msg="Start event monitor" Jul 7 00:17:36.571677 containerd[1994]: time="2025-07-07T00:17:36.571662385Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:17:36.571753 containerd[1994]: time="2025-07-07T00:17:36.571742749Z" level=info msg="Start streaming server" Jul 7 00:17:36.571975 containerd[1994]: time="2025-07-07T00:17:36.571958108Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 00:17:36.572060 containerd[1994]: time="2025-07-07T00:17:36.572047132Z" level=info msg="runtime interface starting up..." Jul 7 00:17:36.572135 containerd[1994]: time="2025-07-07T00:17:36.572121624Z" level=info msg="starting plugins..." Jul 7 00:17:36.573215 containerd[1994]: time="2025-07-07T00:17:36.573003490Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 00:17:36.574146 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:17:36.580837 containerd[1994]: time="2025-07-07T00:17:36.576567464Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:17:36.580837 containerd[1994]: time="2025-07-07T00:17:36.576637426Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:17:36.580837 containerd[1994]: time="2025-07-07T00:17:36.577840382Z" level=info msg="containerd successfully booted in 0.416296s" Jul 7 00:17:36.576761 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:17:36.587345 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.0696 INFO Checking if agent identity type EC2 can be assumed Jul 7 00:17:36.612976 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:17:36.625668 systemd[1]: Started sshd@0-172.31.31.140:22-147.75.109.163:37342.service - OpenSSH per-connection server daemon (147.75.109.163:37342). Jul 7 00:17:36.693329 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2744 INFO Agent will take identity from EC2 Jul 7 00:17:36.694734 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:17:36.700001 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:17:36.706630 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:17:36.708615 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:17:36.786686 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2761 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 7 00:17:36.813819 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.813819 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 00:17:36.814530 amazon-ssm-agent[2153]: 2025/07/07 00:17:36 processing appconfig overrides Jul 7 00:17:36.840768 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2761 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 7 00:17:36.840768 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2761 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 00:17:36.840768 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2761 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2761 INFO [Registrar] Starting registrar module Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2826 INFO [EC2Identity] Checking disk for registration info Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2842 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.2842 INFO [EC2Identity] Generating registration keypair Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.7745 INFO [EC2Identity] Checking write access before registering Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.7750 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8131 INFO [EC2Identity] EC2 registration was successful. Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8131 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8135 INFO [CredentialRefresher] credentialRefresher has started Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8136 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8405 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 00:17:36.840947 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8407 INFO [CredentialRefresher] Credentials ready Jul 7 00:17:36.854969 tar[1980]: linux-amd64/LICENSE Jul 7 00:17:36.855402 tar[1980]: linux-amd64/README.md Jul 7 00:17:36.874140 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:17:36.885814 amazon-ssm-agent[2153]: 2025-07-07 00:17:36.8408 INFO [CredentialRefresher] Next credential rotation will be in 29.999993969416668 minutes Jul 7 00:17:36.931467 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 37342 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:36.933888 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:36.942417 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:17:36.945608 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:17:36.957036 systemd-logind[1973]: New session 1 of user core. Jul 7 00:17:36.974365 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:17:36.979132 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:17:37.000578 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:17:37.004891 systemd-logind[1973]: New session c1 of user core. Jul 7 00:17:37.210449 systemd[2227]: Queued start job for default target default.target. Jul 7 00:17:37.222599 systemd[2227]: Created slice app.slice - User Application Slice. Jul 7 00:17:37.222641 systemd[2227]: Reached target paths.target - Paths. Jul 7 00:17:37.223065 systemd[2227]: Reached target timers.target - Timers. Jul 7 00:17:37.224731 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:17:37.239845 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:17:37.240003 systemd[2227]: Reached target sockets.target - Sockets. 
Jul 7 00:17:37.240068 systemd[2227]: Reached target basic.target - Basic System. Jul 7 00:17:37.240118 systemd[2227]: Reached target default.target - Main User Target. Jul 7 00:17:37.240156 systemd[2227]: Startup finished in 226ms. Jul 7 00:17:37.240999 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:17:37.248441 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:17:37.398640 systemd[1]: Started sshd@1-172.31.31.140:22-147.75.109.163:37348.service - OpenSSH per-connection server daemon (147.75.109.163:37348). Jul 7 00:17:37.584479 sshd[2238]: Accepted publickey for core from 147.75.109.163 port 37348 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:37.588328 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:37.594921 systemd-logind[1973]: New session 2 of user core. Jul 7 00:17:37.602466 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:17:37.724437 sshd[2240]: Connection closed by 147.75.109.163 port 37348 Jul 7 00:17:37.725019 sshd-session[2238]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:37.729830 systemd[1]: sshd@1-172.31.31.140:22-147.75.109.163:37348.service: Deactivated successfully. Jul 7 00:17:37.731967 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:17:37.733287 systemd-logind[1973]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:17:37.735234 systemd-logind[1973]: Removed session 2. Jul 7 00:17:37.761663 systemd[1]: Started sshd@2-172.31.31.140:22-147.75.109.163:37364.service - OpenSSH per-connection server daemon (147.75.109.163:37364). Jul 7 00:17:37.853529 amazon-ssm-agent[2153]: 2025-07-07 00:17:37.8533 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 00:17:37.951668 sshd[2246]: Accepted publickey for core from 147.75.109.163 port 37364 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:37.953340 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:37.955447 amazon-ssm-agent[2153]: 2025-07-07 00:17:37.8555 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2250) started Jul 7 00:17:37.960324 systemd-logind[1973]: New session 3 of user core. Jul 7 00:17:37.966856 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:17:37.991196 ntpd[1966]: Listen normally on 6 eth0 [fe80::477:14ff:fe69:3f6f%2]:123 Jul 7 00:17:37.991567 ntpd[1966]: 7 Jul 00:17:37 ntpd[1966]: Listen normally on 6 eth0 [fe80::477:14ff:fe69:3f6f%2]:123 Jul 7 00:17:38.056277 amazon-ssm-agent[2153]: 2025-07-07 00:17:37.8555 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 00:17:38.088510 sshd[2256]: Connection closed by 147.75.109.163 port 37364 Jul 7 00:17:38.088990 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:38.092671 systemd[1]: sshd@2-172.31.31.140:22-147.75.109.163:37364.service: Deactivated successfully. Jul 7 00:17:38.094609 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:17:38.095757 systemd-logind[1973]: Session 3 logged out. Waiting for processes to exit. Jul 7 00:17:38.097045 systemd-logind[1973]: Removed session 3. Jul 7 00:17:39.886332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
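ntpd's earlier bind failure on the fe80::…%2 address resolves itself here: once eth0 gained its IPv6 link-local address, the routing-socket listener picked up the change and ntpd opened listen socket 6 on it. To confirm the daemon is answering on UDP 123, a bare-bones SNTP query suffices; this sketch assumes ntpd is willing to answer client-mode packets on localhost:

```python
import socket
import struct
import time

NTP_UNIX_DELTA = 2_208_988_800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_query(server: str = "127.0.0.1", timeout: float = 2.0) -> float:
    """Send one SNTP client (mode 3) packet and return the server's transmit time."""
    packet = bytearray(48)
    packet[0] = (4 << 3) | 3  # LI=0, version 4, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp field
    return secs - NTP_UNIX_DELTA + frac / 2**32

print(time.ctime(sntp_query()))
```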
Jul 7 00:17:39.887498 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:17:39.889809 systemd[1]: Startup finished in 2.808s (kernel) + 7.808s (initrd) + 9.459s (userspace) = 20.077s. Jul 7 00:17:39.894343 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:17:40.956826 kubelet[2271]: E0707 00:17:40.956750 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:17:40.959656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:17:40.959851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:17:40.960326 systemd[1]: kubelet.service: Consumed 1.075s CPU time, 264.4M memory peak. Jul 7 00:17:42.576663 systemd-resolved[1828]: Clock change detected. Flushing caches. Jul 7 00:17:48.712663 systemd[1]: Started sshd@3-172.31.31.140:22-147.75.109.163:51546.service - OpenSSH per-connection server daemon (147.75.109.163:51546). Jul 7 00:17:48.888549 sshd[2283]: Accepted publickey for core from 147.75.109.163 port 51546 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:48.890023 sshd-session[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:48.896205 systemd-logind[1973]: New session 4 of user core. Jul 7 00:17:48.902598 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:17:49.024132 sshd[2285]: Connection closed by 147.75.109.163 port 51546 Jul 7 00:17:49.024657 sshd-session[2283]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:49.028729 systemd[1]: sshd@3-172.31.31.140:22-147.75.109.163:51546.service: Deactivated successfully. Jul 7 00:17:49.030520 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:17:49.031216 systemd-logind[1973]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:17:49.032822 systemd-logind[1973]: Removed session 4. Jul 7 00:17:49.056424 systemd[1]: Started sshd@4-172.31.31.140:22-147.75.109.163:51554.service - OpenSSH per-connection server daemon (147.75.109.163:51554). Jul 7 00:17:49.236897 sshd[2291]: Accepted publickey for core from 147.75.109.163 port 51554 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:49.238321 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:49.244671 systemd-logind[1973]: New session 5 of user core. Jul 7 00:17:49.251595 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:17:49.369404 sshd[2293]: Connection closed by 147.75.109.163 port 51554 Jul 7 00:17:49.370213 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:49.374222 systemd[1]: sshd@4-172.31.31.140:22-147.75.109.163:51554.service: Deactivated successfully. Jul 7 00:17:49.376453 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:17:49.378745 systemd-logind[1973]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:17:49.380214 systemd-logind[1973]: Removed session 5. Jul 7 00:17:49.406386 systemd[1]: Started sshd@5-172.31.31.140:22-147.75.109.163:51564.service - OpenSSH per-connection server daemon (147.75.109.163:51564). 
Jul 7 00:17:49.586131 sshd[2299]: Accepted publickey for core from 147.75.109.163 port 51564 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:49.587930 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:49.594453 systemd-logind[1973]: New session 6 of user core. Jul 7 00:17:49.600634 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 00:17:49.724773 sshd[2301]: Connection closed by 147.75.109.163 port 51564 Jul 7 00:17:49.725272 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:49.729781 systemd[1]: sshd@5-172.31.31.140:22-147.75.109.163:51564.service: Deactivated successfully. Jul 7 00:17:49.732074 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:17:49.733020 systemd-logind[1973]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:17:49.734548 systemd-logind[1973]: Removed session 6. Jul 7 00:17:49.761541 systemd[1]: Started sshd@6-172.31.31.140:22-147.75.109.163:51578.service - OpenSSH per-connection server daemon (147.75.109.163:51578). Jul 7 00:17:49.943708 sshd[2307]: Accepted publickey for core from 147.75.109.163 port 51578 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:49.945229 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:49.950451 systemd-logind[1973]: New session 7 of user core. Jul 7 00:17:49.954562 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:17:50.072077 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:17:50.072408 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:17:50.088464 sudo[2310]: pam_unix(sudo:session): session closed for user root Jul 7 00:17:50.112291 sshd[2309]: Connection closed by 147.75.109.163 port 51578 Jul 7 00:17:50.113011 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:50.117393 systemd[1]: sshd@6-172.31.31.140:22-147.75.109.163:51578.service: Deactivated successfully. Jul 7 00:17:50.119009 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:17:50.119716 systemd-logind[1973]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:17:50.121151 systemd-logind[1973]: Removed session 7. Jul 7 00:17:50.145531 systemd[1]: Started sshd@7-172.31.31.140:22-147.75.109.163:51580.service - OpenSSH per-connection server daemon (147.75.109.163:51580). Jul 7 00:17:50.321776 sshd[2316]: Accepted publickey for core from 147.75.109.163 port 51580 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:50.323266 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:50.329994 systemd-logind[1973]: New session 8 of user core. Jul 7 00:17:50.336586 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 00:17:50.437427 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:17:50.437806 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:17:50.443273 sudo[2320]: pam_unix(sudo:session): session closed for user root Jul 7 00:17:50.449140 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 00:17:50.449621 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:17:50.460517 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 00:17:50.506784 augenrules[2342]: No rules Jul 7 00:17:50.508318 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:17:50.508805 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 00:17:50.510167 sudo[2319]: pam_unix(sudo:session): session closed for user root Jul 7 00:17:50.533457 sshd[2318]: Connection closed by 147.75.109.163 port 51580 Jul 7 00:17:50.533956 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Jul 7 00:17:50.537288 systemd[1]: sshd@7-172.31.31.140:22-147.75.109.163:51580.service: Deactivated successfully. Jul 7 00:17:50.539262 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:17:50.541505 systemd-logind[1973]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:17:50.542625 systemd-logind[1973]: Removed session 8. Jul 7 00:17:50.569323 systemd[1]: Started sshd@8-172.31.31.140:22-147.75.109.163:51592.service - OpenSSH per-connection server daemon (147.75.109.163:51592). Jul 7 00:17:50.754390 sshd[2351]: Accepted publickey for core from 147.75.109.163 port 51592 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:17:50.756019 sshd-session[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:17:50.762203 systemd-logind[1973]: New session 9 of user core. Jul 7 00:17:50.771393 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:17:50.877443 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:17:50.877934 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:17:51.335533 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:17:51.353824 (dockerd)[2375]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:17:51.625957 dockerd[2375]: time="2025-07-07T00:17:51.625569477Z" level=info msg="Starting up" Jul 7 00:17:51.626410 dockerd[2375]: time="2025-07-07T00:17:51.626382493Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 00:17:51.631662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:17:51.633751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:17:51.687707 systemd[1]: var-lib-docker-metacopy\x2dcheck4130836949-merged.mount: Deactivated successfully. Jul 7 00:17:51.732229 dockerd[2375]: time="2025-07-07T00:17:51.732167243Z" level=info msg="Loading containers: start." Jul 7 00:17:51.744557 kernel: Initializing XFRM netlink socket Jul 7 00:17:51.972203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 00:17:51.983601 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:17:52.025927 (udev-worker)[2399]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:17:52.065012 kubelet[2499]: E0707 00:17:52.064966 2499 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:17:52.071764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:17:52.072056 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:17:52.073374 systemd[1]: kubelet.service: Consumed 208ms CPU time, 110.8M memory peak. Jul 7 00:17:52.093030 systemd-networkd[1825]: docker0: Link UP Jul 7 00:17:52.106182 dockerd[2375]: time="2025-07-07T00:17:52.106114736Z" level=info msg="Loading containers: done." Jul 7 00:17:52.137559 dockerd[2375]: time="2025-07-07T00:17:52.137514517Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:17:52.137736 dockerd[2375]: time="2025-07-07T00:17:52.137609994Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 00:17:52.137736 dockerd[2375]: time="2025-07-07T00:17:52.137721603Z" level=info msg="Initializing buildkit" Jul 7 00:17:52.183264 dockerd[2375]: time="2025-07-07T00:17:52.183208874Z" level=info msg="Completed buildkit initialization" Jul 7 00:17:52.193779 dockerd[2375]: time="2025-07-07T00:17:52.193724958Z" level=info msg="Daemon has completed initialization" Jul 7 00:17:52.193931 dockerd[2375]: time="2025-07-07T00:17:52.193807149Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:17:52.194840 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:17:53.223316 containerd[1994]: time="2025-07-07T00:17:53.223273521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 00:17:53.833137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396889330.mount: Deactivated successfully. 
Jul 7 00:17:55.245803 containerd[1994]: time="2025-07-07T00:17:55.245748088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:55.246961 containerd[1994]: time="2025-07-07T00:17:55.246924484Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 7 00:17:55.249140 containerd[1994]: time="2025-07-07T00:17:55.247837129Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:55.250439 containerd[1994]: time="2025-07-07T00:17:55.250398940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:55.251379 containerd[1994]: time="2025-07-07T00:17:55.251328882Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.02801326s" Jul 7 00:17:55.251503 containerd[1994]: time="2025-07-07T00:17:55.251479898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 7 00:17:55.256306 containerd[1994]: time="2025-07-07T00:17:55.256269586Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 00:17:56.740760 containerd[1994]: time="2025-07-07T00:17:56.740574575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:56.742871 containerd[1994]: time="2025-07-07T00:17:56.742806660Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 7 00:17:56.745662 containerd[1994]: time="2025-07-07T00:17:56.745595059Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:56.750457 containerd[1994]: time="2025-07-07T00:17:56.750386484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:56.755117 containerd[1994]: time="2025-07-07T00:17:56.755060107Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.49874138s" Jul 7 00:17:56.756374 containerd[1994]: time="2025-07-07T00:17:56.755932769Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 7 00:17:56.757643 
containerd[1994]: time="2025-07-07T00:17:56.757610469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 00:17:58.090752 containerd[1994]: time="2025-07-07T00:17:58.090670124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:58.096933 containerd[1994]: time="2025-07-07T00:17:58.096174546Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 7 00:17:58.100143 containerd[1994]: time="2025-07-07T00:17:58.100074199Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:58.106570 containerd[1994]: time="2025-07-07T00:17:58.105650213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:58.106570 containerd[1994]: time="2025-07-07T00:17:58.106446917Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.348796581s" Jul 7 00:17:58.106570 containerd[1994]: time="2025-07-07T00:17:58.106478343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 7 00:17:58.107491 containerd[1994]: time="2025-07-07T00:17:58.107379259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 00:17:59.215316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613429132.mount: Deactivated successfully. 
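Each pull entry reports both the compressed bytes transferred and the elapsed time, so a rough effective pull rate can be read straight off the log; for the kube-scheduler image above (approximate, since "bytes read" is the compressed transfer, not the unpacked image size):

bytes_read = 18_783_671      # "active requests=0, bytes read=18783671"
elapsed_s = 1.348796581      # "... in 1.348796581s"
print(f"~{bytes_read / elapsed_s / 2**20:.1f} MiB/s effective pull rate")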
Jul 7 00:17:59.782294 containerd[1994]: time="2025-07-07T00:17:59.782239808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:59.784535 containerd[1994]: time="2025-07-07T00:17:59.784297848Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 7 00:17:59.787125 containerd[1994]: time="2025-07-07T00:17:59.787085129Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:59.790471 containerd[1994]: time="2025-07-07T00:17:59.790433288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:17:59.791021 containerd[1994]: time="2025-07-07T00:17:59.790995386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.683585923s" Jul 7 00:17:59.791109 containerd[1994]: time="2025-07-07T00:17:59.791097005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 7 00:17:59.791599 containerd[1994]: time="2025-07-07T00:17:59.791571817Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 00:18:00.405089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408230920.mount: Deactivated successfully. Jul 7 00:18:02.326952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 00:18:02.337801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
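The kubelet restart counter reaches 2 roughly ten seconds after the first failure; the gap between the failure at 00:17:52.072056 and the restart scheduled at 00:18:02.326952 is consistent with a Restart=on-failure unit using a RestartSec of about 10 s (an inference from the timestamps, not something the log states directly):

from datetime import datetime

fmt = "%H:%M:%S.%f"
t_failed = datetime.strptime("00:17:52.072056", fmt)    # kubelet.service: Failed with result 'exit-code'
t_restart = datetime.strptime("00:18:02.326952", fmt)   # kubelet.service: Scheduled restart job (counter 2)
print(f"restart scheduled {(t_restart - t_failed).total_seconds():.2f} s after the failure")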
Jul 7 00:18:02.662744 containerd[1994]: time="2025-07-07T00:18:02.662308181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:02.675558 containerd[1994]: time="2025-07-07T00:18:02.675503669Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 00:18:02.683966 containerd[1994]: time="2025-07-07T00:18:02.681730906Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:02.696367 containerd[1994]: time="2025-07-07T00:18:02.693765768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:02.697639 containerd[1994]: time="2025-07-07T00:18:02.697578930Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.905960799s" Jul 7 00:18:02.697639 containerd[1994]: time="2025-07-07T00:18:02.697632364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 00:18:02.698625 containerd[1994]: time="2025-07-07T00:18:02.698188233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:18:02.940050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:18:02.958091 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:18:03.082365 kubelet[2722]: E0707 00:18:03.082311 2722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:18:03.089204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:18:03.089968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:18:03.090678 systemd[1]: kubelet.service: Consumed 216ms CPU time, 109.1M memory peak. Jul 7 00:18:03.330315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854218904.mount: Deactivated successfully. 
Jul 7 00:18:03.349359 containerd[1994]: time="2025-07-07T00:18:03.349278635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:18:03.352128 containerd[1994]: time="2025-07-07T00:18:03.352061449Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:18:03.360364 containerd[1994]: time="2025-07-07T00:18:03.359494544Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:18:03.370599 containerd[1994]: time="2025-07-07T00:18:03.370545691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:18:03.385100 containerd[1994]: time="2025-07-07T00:18:03.385047135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 686.821317ms" Jul 7 00:18:03.386616 containerd[1994]: time="2025-07-07T00:18:03.385295224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:18:03.386750 containerd[1994]: time="2025-07-07T00:18:03.386659508Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 00:18:04.009449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266342743.mount: Deactivated successfully. Jul 7 00:18:06.741516 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
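Comparing the "size" figures containerd has reported so far, pause:3.10 is roughly two orders of magnitude smaller than the other control-plane images, which is why its pull completes in well under a second (all byte counts below are copied from the log entries above):

sizes_bytes = {
    "kube-apiserver:v1.31.10": 28_074_544,
    "kube-controller-manager:v1.31.10": 26_315_128,
    "kube-scheduler:v1.31.10": 20_385_523,
    "kube-proxy:v1.31.10": 30_382_962,
    "coredns:v1.11.3": 18_562_039,
    "pause:3.10": 320_368,
}
for name, size in sorted(sizes_bytes.items(), key=lambda kv: -kv[1]):
    print(f"{name:35s} {size / 2**20:6.1f} MiB")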
Jul 7 00:18:06.785519 containerd[1994]: time="2025-07-07T00:18:06.785464912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:06.790948 containerd[1994]: time="2025-07-07T00:18:06.790892099Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 7 00:18:06.794675 containerd[1994]: time="2025-07-07T00:18:06.794625325Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:06.804157 containerd[1994]: time="2025-07-07T00:18:06.803695845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:06.805854 containerd[1994]: time="2025-07-07T00:18:06.805787032Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.419096925s" Jul 7 00:18:06.805854 containerd[1994]: time="2025-07-07T00:18:06.805830916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 00:18:09.880280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:18:09.880557 systemd[1]: kubelet.service: Consumed 216ms CPU time, 109.1M memory peak. Jul 7 00:18:09.883416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:18:09.915828 systemd[1]: Reload requested from client PID 2818 ('systemctl') (unit session-9.scope)... Jul 7 00:18:09.916032 systemd[1]: Reloading... Jul 7 00:18:10.030369 zram_generator::config[2862]: No configuration found. Jul 7 00:18:10.197534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:18:10.337471 systemd[1]: Reloading finished in 420 ms. Jul 7 00:18:10.399484 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:18:10.399591 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:18:10.399915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:18:10.399974 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98M memory peak. Jul 7 00:18:10.402760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:18:10.623411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:18:10.633957 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:18:10.686984 kubelet[2926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:18:10.686984 kubelet[2926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 7 00:18:10.686984 kubelet[2926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:18:10.688651 kubelet[2926]: I0707 00:18:10.688597 2926 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:18:10.875869 kubelet[2926]: I0707 00:18:10.875638 2926 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:18:10.875869 kubelet[2926]: I0707 00:18:10.875674 2926 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:18:10.876332 kubelet[2926]: I0707 00:18:10.876123 2926 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:18:10.923129 kubelet[2926]: I0707 00:18:10.922474 2926 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:18:10.929835 kubelet[2926]: E0707 00:18:10.929793 2926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:10.944802 kubelet[2926]: I0707 00:18:10.944773 2926 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:18:10.948949 kubelet[2926]: I0707 00:18:10.948913 2926 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:18:10.954141 kubelet[2926]: I0707 00:18:10.954082 2926 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:18:10.954353 kubelet[2926]: I0707 00:18:10.954311 2926 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:18:10.954566 kubelet[2926]: I0707 00:18:10.954359 2926 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:18:10.954566 kubelet[2926]: I0707 00:18:10.954556 2926 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:18:10.954566 kubelet[2926]: I0707 00:18:10.954566 2926 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:18:10.954877 kubelet[2926]: I0707 00:18:10.954672 2926 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:18:10.958708 kubelet[2926]: I0707 00:18:10.958469 2926 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:18:10.958708 kubelet[2926]: I0707 00:18:10.958509 2926 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:18:10.961676 kubelet[2926]: I0707 00:18:10.961633 2926 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:18:10.961676 kubelet[2926]: I0707 00:18:10.961675 2926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:18:10.964809 kubelet[2926]: W0707 00:18:10.963567 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:10.964809 kubelet[2926]: E0707 00:18:10.963661 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:10.965112 kubelet[2926]: W0707 00:18:10.965088 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:10.965352 kubelet[2926]: E0707 00:18:10.965309 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:10.965589 kubelet[2926]: I0707 00:18:10.965568 2926 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:18:10.969889 kubelet[2926]: I0707 00:18:10.969851 2926 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:18:10.972365 kubelet[2926]: W0707 00:18:10.970923 2926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:18:10.972365 kubelet[2926]: I0707 00:18:10.971923 2926 server.go:1274] "Started kubelet" Jul 7 00:18:10.972717 kubelet[2926]: I0707 00:18:10.972686 2926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:18:10.973902 kubelet[2926]: I0707 00:18:10.973867 2926 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:18:10.979538 kubelet[2926]: I0707 00:18:10.978690 2926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:18:10.982197 kubelet[2926]: I0707 00:18:10.982149 2926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:18:10.982383 kubelet[2926]: I0707 00:18:10.982364 2926 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:18:10.985253 kubelet[2926]: E0707 00:18:10.982621 2926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.140:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.140:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-140.184fd002e80a1f71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-140,UID:ip-172-31-31-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-140,},FirstTimestamp:2025-07-07 00:18:10.971901809 +0000 UTC m=+0.332461926,LastTimestamp:2025-07-07 00:18:10.971901809 +0000 UTC m=+0.332461926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-140,}" Jul 7 00:18:10.985951 kubelet[2926]: I0707 00:18:10.985933 2926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:18:10.988084 kubelet[2926]: E0707 00:18:10.988055 2926 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"ip-172-31-31-140\" not found" Jul 7 00:18:10.988202 kubelet[2926]: I0707 00:18:10.988195 2926 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:18:10.991369 kubelet[2926]: I0707 00:18:10.991104 2926 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:18:10.991369 kubelet[2926]: I0707 00:18:10.991174 2926 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:18:10.991677 kubelet[2926]: W0707 00:18:10.991644 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:10.992705 kubelet[2926]: E0707 00:18:10.992586 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:10.992705 kubelet[2926]: E0707 00:18:10.992667 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="200ms" Jul 7 00:18:10.998743 kubelet[2926]: I0707 00:18:10.998695 2926 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:18:10.998845 kubelet[2926]: I0707 00:18:10.998791 2926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:18:11.009889 kubelet[2926]: I0707 00:18:11.009693 2926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:18:11.010639 kubelet[2926]: E0707 00:18:11.010592 2926 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:18:11.015400 kubelet[2926]: I0707 00:18:11.015211 2926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 00:18:11.016488 kubelet[2926]: I0707 00:18:11.016450 2926 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:18:11.016488 kubelet[2926]: I0707 00:18:11.016490 2926 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:18:11.016636 kubelet[2926]: E0707 00:18:11.016545 2926 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:18:11.020983 kubelet[2926]: W0707 00:18:11.020826 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:11.020983 kubelet[2926]: E0707 00:18:11.020907 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:11.021144 kubelet[2926]: I0707 00:18:11.021132 2926 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:18:11.051367 kubelet[2926]: I0707 00:18:11.051328 2926 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:18:11.051587 kubelet[2926]: I0707 00:18:11.051517 2926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:18:11.051587 kubelet[2926]: I0707 00:18:11.051545 2926 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:18:11.058446 kubelet[2926]: I0707 00:18:11.058404 2926 policy_none.go:49] "None policy: Start" Jul 7 00:18:11.059591 kubelet[2926]: I0707 00:18:11.059562 2926 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:18:11.059591 kubelet[2926]: I0707 00:18:11.059588 2926 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:18:11.070512 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:18:11.081805 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:18:11.086966 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 00:18:11.089250 kubelet[2926]: E0707 00:18:11.089205 2926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-140\" not found" Jul 7 00:18:11.094630 kubelet[2926]: I0707 00:18:11.094596 2926 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:18:11.095125 kubelet[2926]: I0707 00:18:11.095113 2926 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:18:11.095277 kubelet[2926]: I0707 00:18:11.095248 2926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:18:11.096119 kubelet[2926]: I0707 00:18:11.095696 2926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:18:11.098385 kubelet[2926]: E0707 00:18:11.098032 2926 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-140\" not found" Jul 7 00:18:11.135385 systemd[1]: Created slice kubepods-burstable-pod3e1b56fb717350c07d781d76ca591c32.slice - libcontainer container kubepods-burstable-pod3e1b56fb717350c07d781d76ca591c32.slice. Jul 7 00:18:11.151544 systemd[1]: Created slice kubepods-burstable-pod42c8375b8fca455d1a3d599327a9ed85.slice - libcontainer container kubepods-burstable-pod42c8375b8fca455d1a3d599327a9ed85.slice. Jul 7 00:18:11.165950 systemd[1]: Created slice kubepods-burstable-pod091f9ad3affcf58361e81190d1c2450c.slice - libcontainer container kubepods-burstable-pod091f9ad3affcf58361e81190d1c2450c.slice. Jul 7 00:18:11.194117 kubelet[2926]: E0707 00:18:11.194062 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="400ms" Jul 7 00:18:11.198436 kubelet[2926]: I0707 00:18:11.197876 2926 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-140" Jul 7 00:18:11.198436 kubelet[2926]: E0707 00:18:11.198252 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jul 7 00:18:11.292039 kubelet[2926]: I0707 00:18:11.292001 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:11.292295 kubelet[2926]: I0707 00:18:11.292280 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:11.292403 kubelet[2926]: I0707 00:18:11.292387 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:11.292478 kubelet[2926]: I0707 00:18:11.292468 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e1b56fb717350c07d781d76ca591c32-ca-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"3e1b56fb717350c07d781d76ca591c32\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:11.292549 kubelet[2926]: I0707 00:18:11.292539 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:11.292614 kubelet[2926]: I0707 00:18:11.292606 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:11.292694 kubelet[2926]: I0707 00:18:11.292670 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/091f9ad3affcf58361e81190d1c2450c-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-140\" (UID: \"091f9ad3affcf58361e81190d1c2450c\") " pod="kube-system/kube-scheduler-ip-172-31-31-140" Jul 7 00:18:11.292694 kubelet[2926]: I0707 00:18:11.292692 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e1b56fb717350c07d781d76ca591c32-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"3e1b56fb717350c07d781d76ca591c32\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:11.292828 kubelet[2926]: I0707 00:18:11.292708 2926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e1b56fb717350c07d781d76ca591c32-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"3e1b56fb717350c07d781d76ca591c32\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:11.400579 kubelet[2926]: I0707 00:18:11.400488 2926 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-140" Jul 7 00:18:11.400971 kubelet[2926]: E0707 00:18:11.400944 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jul 7 00:18:11.451224 containerd[1994]: time="2025-07-07T00:18:11.451158624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-140,Uid:3e1b56fb717350c07d781d76ca591c32,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:11.474423 containerd[1994]: time="2025-07-07T00:18:11.472469569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-140,Uid:42c8375b8fca455d1a3d599327a9ed85,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:11.474423 containerd[1994]: time="2025-07-07T00:18:11.474226901Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-140,Uid:091f9ad3affcf58361e81190d1c2450c,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:11.594742 containerd[1994]: time="2025-07-07T00:18:11.594681383Z" level=info msg="connecting to shim 3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171" address="unix:///run/containerd/s/7bd5b2e2f21916edb5a600a15c7681485e0f46fd013681b382255f82c4450ca3" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:11.595720 kubelet[2926]: E0707 00:18:11.595669 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="800ms" Jul 7 00:18:11.602667 containerd[1994]: time="2025-07-07T00:18:11.601403267Z" level=info msg="connecting to shim 27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b" address="unix:///run/containerd/s/8e32b425b7ff0a427dc87ce48fb07f77973407e574281cf1d881effd880c2b9e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:11.607920 containerd[1994]: time="2025-07-07T00:18:11.607434387Z" level=info msg="connecting to shim cfe454232123aed82a20dc65a73c54e94f2c5631c8011f05fcf91f061e489e3c" address="unix:///run/containerd/s/15d3268ef26961d3bcdd745d87b909be5218bf76744b3888dc21fba1ca347bfe" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:11.717556 systemd[1]: Started cri-containerd-27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b.scope - libcontainer container 27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b. Jul 7 00:18:11.727638 systemd[1]: Started cri-containerd-3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171.scope - libcontainer container 3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171. Jul 7 00:18:11.729007 systemd[1]: Started cri-containerd-cfe454232123aed82a20dc65a73c54e94f2c5631c8011f05fcf91f061e489e3c.scope - libcontainer container cfe454232123aed82a20dc65a73c54e94f2c5631c8011f05fcf91f061e489e3c. 
Jul 7 00:18:11.810704 kubelet[2926]: I0707 00:18:11.810668 2926 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-140" Jul 7 00:18:11.816098 kubelet[2926]: E0707 00:18:11.815920 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jul 7 00:18:11.833363 containerd[1994]: time="2025-07-07T00:18:11.833308249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-140,Uid:42c8375b8fca455d1a3d599327a9ed85,Namespace:kube-system,Attempt:0,} returns sandbox id \"27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b\"" Jul 7 00:18:11.846524 containerd[1994]: time="2025-07-07T00:18:11.846465686Z" level=info msg="CreateContainer within sandbox \"27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:18:11.853366 containerd[1994]: time="2025-07-07T00:18:11.851681708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-140,Uid:3e1b56fb717350c07d781d76ca591c32,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfe454232123aed82a20dc65a73c54e94f2c5631c8011f05fcf91f061e489e3c\"" Jul 7 00:18:11.855743 containerd[1994]: time="2025-07-07T00:18:11.855695095Z" level=info msg="CreateContainer within sandbox \"cfe454232123aed82a20dc65a73c54e94f2c5631c8011f05fcf91f061e489e3c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:18:11.870363 containerd[1994]: time="2025-07-07T00:18:11.869413900Z" level=info msg="Container 6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:11.883167 containerd[1994]: time="2025-07-07T00:18:11.883106332Z" level=info msg="CreateContainer within sandbox \"27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\"" Jul 7 00:18:11.886473 containerd[1994]: time="2025-07-07T00:18:11.886434386Z" level=info msg="Container afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:11.887691 containerd[1994]: time="2025-07-07T00:18:11.887654932Z" level=info msg="StartContainer for \"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\"" Jul 7 00:18:11.889960 containerd[1994]: time="2025-07-07T00:18:11.889928483Z" level=info msg="connecting to shim 6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f" address="unix:///run/containerd/s/8e32b425b7ff0a427dc87ce48fb07f77973407e574281cf1d881effd880c2b9e" protocol=ttrpc version=3 Jul 7 00:18:11.901518 containerd[1994]: time="2025-07-07T00:18:11.901466901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-140,Uid:091f9ad3affcf58361e81190d1c2450c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171\"" Jul 7 00:18:11.902431 containerd[1994]: time="2025-07-07T00:18:11.902400716Z" level=info msg="CreateContainer within sandbox \"cfe454232123aed82a20dc65a73c54e94f2c5631c8011f05fcf91f061e489e3c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274\"" Jul 7 
00:18:11.903119 containerd[1994]: time="2025-07-07T00:18:11.903095550Z" level=info msg="StartContainer for \"afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274\"" Jul 7 00:18:11.904677 containerd[1994]: time="2025-07-07T00:18:11.904649617Z" level=info msg="connecting to shim afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274" address="unix:///run/containerd/s/15d3268ef26961d3bcdd745d87b909be5218bf76744b3888dc21fba1ca347bfe" protocol=ttrpc version=3 Jul 7 00:18:11.905942 containerd[1994]: time="2025-07-07T00:18:11.905919096Z" level=info msg="CreateContainer within sandbox \"3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:18:11.922822 containerd[1994]: time="2025-07-07T00:18:11.922784734Z" level=info msg="Container df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:11.922895 systemd[1]: Started cri-containerd-6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f.scope - libcontainer container 6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f. Jul 7 00:18:11.942238 containerd[1994]: time="2025-07-07T00:18:11.942183103Z" level=info msg="CreateContainer within sandbox \"3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\"" Jul 7 00:18:11.942918 containerd[1994]: time="2025-07-07T00:18:11.942845019Z" level=info msg="StartContainer for \"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\"" Jul 7 00:18:11.944199 containerd[1994]: time="2025-07-07T00:18:11.944148634Z" level=info msg="connecting to shim df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c" address="unix:///run/containerd/s/7bd5b2e2f21916edb5a600a15c7681485e0f46fd013681b382255f82c4450ca3" protocol=ttrpc version=3 Jul 7 00:18:11.945627 systemd[1]: Started cri-containerd-afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274.scope - libcontainer container afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274. Jul 7 00:18:11.986563 systemd[1]: Started cri-containerd-df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c.scope - libcontainer container df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c. 
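For cross-referencing later entries, the three containers started in this stretch map to the control-plane components as follows (short prefixes of the ids returned by CreateContainer above; the mapping is copied from the log, the printout is only a convenience):

containers = {
    "6e74632f8b3c": "kube-controller-manager",
    "afc679e48775": "kube-apiserver",
    "df632b625a5d": "kube-scheduler",
}
for short_id, component in containers.items():
    print(f"{short_id}  ->  {component}")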
Jul 7 00:18:12.004039 kubelet[2926]: W0707 00:18:12.003948 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:12.004319 kubelet[2926]: E0707 00:18:12.004046 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-140&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:12.043702 containerd[1994]: time="2025-07-07T00:18:12.043659479Z" level=info msg="StartContainer for \"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\" returns successfully" Jul 7 00:18:12.120673 containerd[1994]: time="2025-07-07T00:18:12.120537919Z" level=info msg="StartContainer for \"afc679e4877532b124fb379551544ddbc6205b582a8a4245c7a44cda02062274\" returns successfully" Jul 7 00:18:12.127467 containerd[1994]: time="2025-07-07T00:18:12.127414820Z" level=info msg="StartContainer for \"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\" returns successfully" Jul 7 00:18:12.213197 kubelet[2926]: W0707 00:18:12.213125 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:12.213389 kubelet[2926]: E0707 00:18:12.213212 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:12.349455 kubelet[2926]: W0707 00:18:12.349280 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:12.349455 kubelet[2926]: E0707 00:18:12.349393 2926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:12.396175 kubelet[2926]: E0707 00:18:12.396120 2926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": dial tcp 172.31.31.140:6443: connect: connection refused" interval="1.6s" Jul 7 00:18:12.420922 kubelet[2926]: W0707 00:18:12.420845 2926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.140:6443: connect: connection refused Jul 7 00:18:12.421069 kubelet[2926]: E0707 00:18:12.420934 2926 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.140:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:18:12.619181 kubelet[2926]: I0707 00:18:12.619081 2926 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-140" Jul 7 00:18:12.620487 kubelet[2926]: E0707 00:18:12.620449 2926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.140:6443/api/v1/nodes\": dial tcp 172.31.31.140:6443: connect: connection refused" node="ip-172-31-31-140" Jul 7 00:18:14.223242 kubelet[2926]: I0707 00:18:14.222940 2926 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-140" Jul 7 00:18:14.692111 kubelet[2926]: E0707 00:18:14.691833 2926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-140\" not found" node="ip-172-31-31-140" Jul 7 00:18:14.724013 kubelet[2926]: E0707 00:18:14.723905 2926 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-140.184fd002e80a1f71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-140,UID:ip-172-31-31-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-140,},FirstTimestamp:2025-07-07 00:18:10.971901809 +0000 UTC m=+0.332461926,LastTimestamp:2025-07-07 00:18:10.971901809 +0000 UTC m=+0.332461926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-140,}" Jul 7 00:18:14.777913 kubelet[2926]: I0707 00:18:14.777787 2926 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-140" Jul 7 00:18:14.777913 kubelet[2926]: E0707 00:18:14.777824 2926 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-140\": node \"ip-172-31-31-140\" not found" Jul 7 00:18:14.778751 kubelet[2926]: E0707 00:18:14.778652 2926 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-140.184fd002ea582551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-140,UID:ip-172-31-31-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-140,},FirstTimestamp:2025-07-07 00:18:11.010569553 +0000 UTC m=+0.371129670,LastTimestamp:2025-07-07 00:18:11.010569553 +0000 UTC m=+0.371129670,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-140,}" Jul 7 00:18:14.832964 kubelet[2926]: E0707 00:18:14.832844 2926 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-140.184fd002ecaeda3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-140,UID:ip-172-31-31-140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-31-140 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-31-140,},FirstTimestamp:2025-07-07 00:18:11.049806399 +0000 UTC m=+0.410366507,LastTimestamp:2025-07-07 00:18:11.049806399 +0000 UTC m=+0.410366507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-140,}" Jul 7 00:18:14.976273 kubelet[2926]: I0707 00:18:14.976226 2926 apiserver.go:52] "Watching apiserver" Jul 7 00:18:14.991554 kubelet[2926]: I0707 00:18:14.991520 2926 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:18:15.078273 kubelet[2926]: E0707 00:18:15.078037 2926 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-140\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:17.097963 systemd[1]: Reload requested from client PID 3193 ('systemctl') (unit session-9.scope)... Jul 7 00:18:17.097981 systemd[1]: Reloading... Jul 7 00:18:17.281407 zram_generator::config[3237]: No configuration found. Jul 7 00:18:17.431827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:18:17.593245 systemd[1]: Reloading finished in 494 ms. Jul 7 00:18:17.631251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:18:17.651359 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:18:17.651756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:18:17.652952 systemd[1]: kubelet.service: Consumed 772ms CPU time, 126.4M memory peak. Jul 7 00:18:17.665677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:18:18.076288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:18:18.088963 (kubelet)[3297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:18:18.184618 kubelet[3297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:18:18.184618 kubelet[3297]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:18:18.184618 kubelet[3297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
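By this point the kubelet no longer exits on startup, so /var/lib/kubelet/config.yaml is in place, and the deprecation warnings above point at that same --config file. A minimal sketch of the kind of file kubeadm generates there: cgroupDriver matches the driver the kubelet reports using in this boot, staticPodPath and the client CA path match the paths it logs, and the rest of the layout is the standard KubeletConfiguration schema, written under /tmp so the sketch stays harmless:

from pathlib import Path

# Values marked "from the log" match what the kubelet reports in this boot; the file
# itself is an illustrative sketch, not a copy of the node's real configuration.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                        # driver the kubelet reports using (from the log)
staticPodPath: /etc/kubernetes/manifests     # "Adding static pod path" (from the log)
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt # client-ca-bundle path (from the log)
"""
Path("/tmp/kubelet-config-example.yaml").write_text(KUBELET_CONFIG)
print("wrote example config to /tmp/kubelet-config-example.yaml")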
Jul 7 00:18:18.185049 kubelet[3297]: I0707 00:18:18.184698 3297 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:18:18.206234 kubelet[3297]: I0707 00:18:18.206187 3297 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:18:18.206234 kubelet[3297]: I0707 00:18:18.206221 3297 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:18:18.206631 kubelet[3297]: I0707 00:18:18.206606 3297 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:18:18.209359 kubelet[3297]: I0707 00:18:18.209267 3297 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:18:18.220173 kubelet[3297]: I0707 00:18:18.220134 3297 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:18:18.228303 sudo[3311]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 00:18:18.228748 sudo[3311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 00:18:18.234578 kubelet[3297]: I0707 00:18:18.234122 3297 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 00:18:18.242056 kubelet[3297]: I0707 00:18:18.242006 3297 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 00:18:18.242215 kubelet[3297]: I0707 00:18:18.242150 3297 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:18:18.242333 kubelet[3297]: I0707 00:18:18.242284 3297 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:18:18.242590 kubelet[3297]: I0707 00:18:18.242320 3297 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:18:18.242754 
kubelet[3297]: I0707 00:18:18.242606 3297 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:18:18.242754 kubelet[3297]: I0707 00:18:18.242622 3297 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:18:18.242754 kubelet[3297]: I0707 00:18:18.242657 3297 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:18:18.244554 kubelet[3297]: I0707 00:18:18.242794 3297 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:18:18.244554 kubelet[3297]: I0707 00:18:18.242809 3297 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:18:18.244554 kubelet[3297]: I0707 00:18:18.243593 3297 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:18:18.248370 kubelet[3297]: I0707 00:18:18.245369 3297 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:18:18.249400 kubelet[3297]: I0707 00:18:18.249020 3297 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 00:18:18.257922 kubelet[3297]: I0707 00:18:18.257877 3297 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:18:18.258547 kubelet[3297]: I0707 00:18:18.258530 3297 server.go:1274] "Started kubelet" Jul 7 00:18:18.278224 kubelet[3297]: I0707 00:18:18.277248 3297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:18:18.295364 kubelet[3297]: I0707 00:18:18.294810 3297 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:18:18.300276 kubelet[3297]: I0707 00:18:18.299949 3297 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:18:18.303480 kubelet[3297]: I0707 00:18:18.303432 3297 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:18:18.304067 kubelet[3297]: I0707 00:18:18.303678 3297 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:18:18.304067 kubelet[3297]: I0707 00:18:18.304004 3297 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:18:18.309452 kubelet[3297]: I0707 00:18:18.308082 3297 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:18:18.309452 kubelet[3297]: E0707 00:18:18.308389 3297 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-140\" not found" Jul 7 00:18:18.318515 kubelet[3297]: I0707 00:18:18.318479 3297 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:18:18.318653 kubelet[3297]: I0707 00:18:18.318638 3297 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:18:18.320195 kubelet[3297]: I0707 00:18:18.320167 3297 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:18:18.321772 kubelet[3297]: I0707 00:18:18.321736 3297 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:18:18.323916 kubelet[3297]: I0707 00:18:18.323082 3297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:18:18.324681 kubelet[3297]: I0707 00:18:18.324651 3297 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 00:18:18.324805 kubelet[3297]: I0707 00:18:18.324692 3297 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:18:18.324805 kubelet[3297]: I0707 00:18:18.324714 3297 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:18:18.324805 kubelet[3297]: E0707 00:18:18.324762 3297 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:18:18.338850 kubelet[3297]: I0707 00:18:18.338239 3297 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:18:18.342584 kubelet[3297]: E0707 00:18:18.342264 3297 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:18:18.416337 kubelet[3297]: I0707 00:18:18.416307 3297 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:18:18.416503 kubelet[3297]: I0707 00:18:18.416381 3297 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:18:18.416503 kubelet[3297]: I0707 00:18:18.416406 3297 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:18:18.417884 kubelet[3297]: I0707 00:18:18.416596 3297 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:18:18.417884 kubelet[3297]: I0707 00:18:18.416614 3297 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:18:18.417884 kubelet[3297]: I0707 00:18:18.416638 3297 policy_none.go:49] "None policy: Start" Jul 7 00:18:18.417884 kubelet[3297]: I0707 00:18:18.417777 3297 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:18:18.417884 kubelet[3297]: I0707 00:18:18.417801 3297 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:18:18.418141 kubelet[3297]: I0707 00:18:18.418066 3297 state_mem.go:75] "Updated machine memory state" Jul 7 00:18:18.425148 kubelet[3297]: E0707 00:18:18.425076 3297 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:18:18.429072 kubelet[3297]: I0707 00:18:18.429038 3297 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:18:18.429809 kubelet[3297]: I0707 00:18:18.429245 3297 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:18:18.429809 kubelet[3297]: I0707 00:18:18.429260 3297 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:18:18.434857 kubelet[3297]: I0707 00:18:18.434794 3297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:18:18.556823 kubelet[3297]: I0707 00:18:18.556779 3297 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-140" Jul 7 00:18:18.573386 kubelet[3297]: I0707 00:18:18.573327 3297 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-140" Jul 7 00:18:18.573541 kubelet[3297]: I0707 00:18:18.573422 3297 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-140" Jul 7 00:18:18.640968 kubelet[3297]: E0707 00:18:18.640779 3297 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-31-140\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-140" Jul 7 00:18:18.820506 kubelet[3297]: I0707 00:18:18.820155 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/091f9ad3affcf58361e81190d1c2450c-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-140\" (UID: \"091f9ad3affcf58361e81190d1c2450c\") " pod="kube-system/kube-scheduler-ip-172-31-31-140" Jul 7 00:18:18.820506 kubelet[3297]: I0707 00:18:18.820197 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e1b56fb717350c07d781d76ca591c32-ca-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"3e1b56fb717350c07d781d76ca591c32\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:18.820506 kubelet[3297]: I0707 00:18:18.820223 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e1b56fb717350c07d781d76ca591c32-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"3e1b56fb717350c07d781d76ca591c32\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:18.820506 kubelet[3297]: I0707 00:18:18.820251 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:18.820506 kubelet[3297]: I0707 00:18:18.820277 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:18.820817 kubelet[3297]: I0707 00:18:18.820308 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:18.820817 kubelet[3297]: I0707 00:18:18.820331 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:18.821299 kubelet[3297]: I0707 00:18:18.821138 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42c8375b8fca455d1a3d599327a9ed85-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-140\" (UID: \"42c8375b8fca455d1a3d599327a9ed85\") " pod="kube-system/kube-controller-manager-ip-172-31-31-140" Jul 7 00:18:18.821533 kubelet[3297]: I0707 00:18:18.821474 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e1b56fb717350c07d781d76ca591c32-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-140\" (UID: \"3e1b56fb717350c07d781d76ca591c32\") " pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 
00:18:18.983601 sudo[3311]: pam_unix(sudo:session): session closed for user root Jul 7 00:18:19.246798 kubelet[3297]: I0707 00:18:19.246628 3297 apiserver.go:52] "Watching apiserver" Jul 7 00:18:19.318892 kubelet[3297]: I0707 00:18:19.318821 3297 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:18:19.393362 kubelet[3297]: E0707 00:18:19.393032 3297 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-140\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-140" Jul 7 00:18:19.412984 kubelet[3297]: I0707 00:18:19.412918 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-140" podStartSLOduration=3.41290298 podStartE2EDuration="3.41290298s" podCreationTimestamp="2025-07-07 00:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:19.412653929 +0000 UTC m=+1.304097025" watchObservedRunningTime="2025-07-07 00:18:19.41290298 +0000 UTC m=+1.304346054" Jul 7 00:18:19.439684 kubelet[3297]: I0707 00:18:19.439600 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-140" podStartSLOduration=1.439581526 podStartE2EDuration="1.439581526s" podCreationTimestamp="2025-07-07 00:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:19.426419968 +0000 UTC m=+1.317863066" watchObservedRunningTime="2025-07-07 00:18:19.439581526 +0000 UTC m=+1.331024613" Jul 7 00:18:19.440023 kubelet[3297]: I0707 00:18:19.439711 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-140" podStartSLOduration=1.439705802 podStartE2EDuration="1.439705802s" podCreationTimestamp="2025-07-07 00:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:19.439116029 +0000 UTC m=+1.330559121" watchObservedRunningTime="2025-07-07 00:18:19.439705802 +0000 UTC m=+1.331148902" Jul 7 00:18:20.607663 sudo[2354]: pam_unix(sudo:session): session closed for user root Jul 7 00:18:20.630243 sshd[2353]: Connection closed by 147.75.109.163 port 51592 Jul 7 00:18:20.633083 sshd-session[2351]: pam_unix(sshd:session): session closed for user core Jul 7 00:18:20.638074 systemd-logind[1973]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:18:20.639201 systemd[1]: sshd@8-172.31.31.140:22-147.75.109.163:51592.service: Deactivated successfully. Jul 7 00:18:20.641871 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:18:20.642126 systemd[1]: session-9.scope: Consumed 5.030s CPU time, 207.5M memory peak. Jul 7 00:18:20.644482 systemd-logind[1973]: Removed session 9. Jul 7 00:18:21.397448 update_engine[1974]: I20250707 00:18:21.397383 1974 update_attempter.cc:509] Updating boot flags... Jul 7 00:18:21.951479 kubelet[3297]: I0707 00:18:21.949523 3297 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:18:21.952055 containerd[1994]: time="2025-07-07T00:18:21.950383724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
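The pod_startup_latency_tracker entries above report podStartSLOduration for the static pods; with no image pull involved (both pull timestamps are the zero value) the figure appears to be essentially observedRunningTime minus podCreationTimestamp. A quick sanity check of that reading, using the timestamps printed in the kube-scheduler entry (the small gap versus the reported 3.41290298s comes from rounding and the tracker's internal clock):

```python
import re
from datetime import datetime

def parse_k8s_time(s: str) -> datetime:
    # Timestamps in the log look like "2025-07-07 00:18:19.412653929 +0000 UTC".
    # Drop the trailing "UTC" and truncate nanoseconds to microseconds so that
    # strptime's %f (max 6 digits) can handle them.
    s = s.replace(" UTC", "")
    s = re.sub(r"\.(\d{6})\d*", r".\1", s)
    fmt = "%Y-%m-%d %H:%M:%S.%f %z" if "." in s else "%Y-%m-%d %H:%M:%S %z"
    return datetime.strptime(s, fmt)

created  = parse_k8s_time("2025-07-07 00:18:16 +0000 UTC")
observed = parse_k8s_time("2025-07-07 00:18:19.412653929 +0000 UTC")
print((observed - created).total_seconds())   # ~3.412653, vs. 3.41290298s reported
```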
Jul 7 00:18:21.954968 kubelet[3297]: I0707 00:18:21.954755 3297 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:18:22.726399 systemd[1]: Created slice kubepods-besteffort-podbf588f7d_d600_4c0d_a996_787be3b3edf2.slice - libcontainer container kubepods-besteffort-podbf588f7d_d600_4c0d_a996_787be3b3edf2.slice. Jul 7 00:18:22.742915 systemd[1]: Created slice kubepods-burstable-pod4da188ec_67ac_46c3_b7e5_db5d8349946a.slice - libcontainer container kubepods-burstable-pod4da188ec_67ac_46c3_b7e5_db5d8349946a.slice. Jul 7 00:18:22.749132 kubelet[3297]: I0707 00:18:22.749086 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf588f7d-d600-4c0d-a996-787be3b3edf2-lib-modules\") pod \"kube-proxy-4gsx8\" (UID: \"bf588f7d-d600-4c0d-a996-787be3b3edf2\") " pod="kube-system/kube-proxy-4gsx8" Jul 7 00:18:22.749424 kubelet[3297]: I0707 00:18:22.749402 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4da188ec-67ac-46c3-b7e5-db5d8349946a-clustermesh-secrets\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.749591 kubelet[3297]: I0707 00:18:22.749574 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-etc-cni-netd\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.749690 kubelet[3297]: I0707 00:18:22.749677 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf588f7d-d600-4c0d-a996-787be3b3edf2-kube-proxy\") pod \"kube-proxy-4gsx8\" (UID: \"bf588f7d-d600-4c0d-a996-787be3b3edf2\") " pod="kube-system/kube-proxy-4gsx8" Jul 7 00:18:22.749769 kubelet[3297]: I0707 00:18:22.749757 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-config-path\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.749848 kubelet[3297]: I0707 00:18:22.749837 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-kernel\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.749932 kubelet[3297]: I0707 00:18:22.749919 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-hubble-tls\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750012 kubelet[3297]: I0707 00:18:22.749997 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qbx\" (UniqueName: \"kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-kube-api-access-57qbx\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") 
" pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750087 kubelet[3297]: I0707 00:18:22.750076 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdsj\" (UniqueName: \"kubernetes.io/projected/bf588f7d-d600-4c0d-a996-787be3b3edf2-kube-api-access-swdsj\") pod \"kube-proxy-4gsx8\" (UID: \"bf588f7d-d600-4c0d-a996-787be3b3edf2\") " pod="kube-system/kube-proxy-4gsx8" Jul 7 00:18:22.750162 kubelet[3297]: I0707 00:18:22.750151 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-cgroup\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750232 kubelet[3297]: I0707 00:18:22.750221 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cni-path\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750402 kubelet[3297]: I0707 00:18:22.750380 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-lib-modules\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750481 kubelet[3297]: I0707 00:18:22.750469 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-xtables-lock\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750562 kubelet[3297]: I0707 00:18:22.750550 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-net\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750638 kubelet[3297]: I0707 00:18:22.750626 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-run\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750716 kubelet[3297]: I0707 00:18:22.750705 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-hostproc\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.750802 kubelet[3297]: I0707 00:18:22.750790 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf588f7d-d600-4c0d-a996-787be3b3edf2-xtables-lock\") pod \"kube-proxy-4gsx8\" (UID: \"bf588f7d-d600-4c0d-a996-787be3b3edf2\") " pod="kube-system/kube-proxy-4gsx8" Jul 7 00:18:22.750881 kubelet[3297]: I0707 00:18:22.750870 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-bpf-maps\") pod \"cilium-tg75z\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " pod="kube-system/cilium-tg75z" Jul 7 00:18:22.936995 systemd[1]: Created slice kubepods-besteffort-poda29ba126_2dd2_4518_86d9_d7cf6f445808.slice - libcontainer container kubepods-besteffort-poda29ba126_2dd2_4518_86d9_d7cf6f445808.slice. Jul 7 00:18:22.952603 kubelet[3297]: I0707 00:18:22.952543 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79bw\" (UniqueName: \"kubernetes.io/projected/a29ba126-2dd2-4518-86d9-d7cf6f445808-kube-api-access-h79bw\") pod \"cilium-operator-5d85765b45-zlrt6\" (UID: \"a29ba126-2dd2-4518-86d9-d7cf6f445808\") " pod="kube-system/cilium-operator-5d85765b45-zlrt6" Jul 7 00:18:22.953203 kubelet[3297]: I0707 00:18:22.952618 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a29ba126-2dd2-4518-86d9-d7cf6f445808-cilium-config-path\") pod \"cilium-operator-5d85765b45-zlrt6\" (UID: \"a29ba126-2dd2-4518-86d9-d7cf6f445808\") " pod="kube-system/cilium-operator-5d85765b45-zlrt6" Jul 7 00:18:23.040563 containerd[1994]: time="2025-07-07T00:18:23.040421831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4gsx8,Uid:bf588f7d-d600-4c0d-a996-787be3b3edf2,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:23.048666 containerd[1994]: time="2025-07-07T00:18:23.048534024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tg75z,Uid:4da188ec-67ac-46c3-b7e5-db5d8349946a,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:23.095551 containerd[1994]: time="2025-07-07T00:18:23.095498732Z" level=info msg="connecting to shim a3e3ffa5e883a99b4c624f71f7760a75bc1da62f6ffe0cd10d34b6a1614526ea" address="unix:///run/containerd/s/a65623d9ee29c666da31e8b2869ad2337da114d9086693756faf364c452ce82e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:23.107365 containerd[1994]: time="2025-07-07T00:18:23.107302302Z" level=info msg="connecting to shim cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11" address="unix:///run/containerd/s/5c91d7b0396fe1db55870b2209e2c516aa8ff84d5d74460503a6ef5261c0022f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:23.131581 systemd[1]: Started cri-containerd-a3e3ffa5e883a99b4c624f71f7760a75bc1da62f6ffe0cd10d34b6a1614526ea.scope - libcontainer container a3e3ffa5e883a99b4c624f71f7760a75bc1da62f6ffe0cd10d34b6a1614526ea. Jul 7 00:18:23.152927 systemd[1]: Started cri-containerd-cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11.scope - libcontainer container cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11. 
Jul 7 00:18:23.193939 containerd[1994]: time="2025-07-07T00:18:23.193895376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4gsx8,Uid:bf588f7d-d600-4c0d-a996-787be3b3edf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3e3ffa5e883a99b4c624f71f7760a75bc1da62f6ffe0cd10d34b6a1614526ea\"" Jul 7 00:18:23.202320 containerd[1994]: time="2025-07-07T00:18:23.202280984Z" level=info msg="CreateContainer within sandbox \"a3e3ffa5e883a99b4c624f71f7760a75bc1da62f6ffe0cd10d34b6a1614526ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:18:23.227229 containerd[1994]: time="2025-07-07T00:18:23.227168283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tg75z,Uid:4da188ec-67ac-46c3-b7e5-db5d8349946a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\"" Jul 7 00:18:23.230294 containerd[1994]: time="2025-07-07T00:18:23.229756050Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 00:18:23.245328 containerd[1994]: time="2025-07-07T00:18:23.245076084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zlrt6,Uid:a29ba126-2dd2-4518-86d9-d7cf6f445808,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:23.246395 containerd[1994]: time="2025-07-07T00:18:23.246287533Z" level=info msg="Container 387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:23.266251 containerd[1994]: time="2025-07-07T00:18:23.266197745Z" level=info msg="CreateContainer within sandbox \"a3e3ffa5e883a99b4c624f71f7760a75bc1da62f6ffe0cd10d34b6a1614526ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088\"" Jul 7 00:18:23.266959 containerd[1994]: time="2025-07-07T00:18:23.266907158Z" level=info msg="StartContainer for \"387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088\"" Jul 7 00:18:23.271665 containerd[1994]: time="2025-07-07T00:18:23.268833253Z" level=info msg="connecting to shim 387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088" address="unix:///run/containerd/s/a65623d9ee29c666da31e8b2869ad2337da114d9086693756faf364c452ce82e" protocol=ttrpc version=3 Jul 7 00:18:23.315691 containerd[1994]: time="2025-07-07T00:18:23.315138721Z" level=info msg="connecting to shim 90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015" address="unix:///run/containerd/s/810a189511964dc29cae9ab8884c1ec8adfbb92fce641d2a84f80b89eb289fba" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:23.324577 systemd[1]: Started cri-containerd-387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088.scope - libcontainer container 387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088. Jul 7 00:18:23.365589 systemd[1]: Started cri-containerd-90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015.scope - libcontainer container 90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015. 
Jul 7 00:18:23.426617 containerd[1994]: time="2025-07-07T00:18:23.426575021Z" level=info msg="StartContainer for \"387ebb483884d24ca522dbc0fa72d6eb3a94688cff53e60a6c921d8621722088\" returns successfully" Jul 7 00:18:23.461789 containerd[1994]: time="2025-07-07T00:18:23.461684575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zlrt6,Uid:a29ba126-2dd2-4518-86d9-d7cf6f445808,Namespace:kube-system,Attempt:0,} returns sandbox id \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\"" Jul 7 00:18:24.433897 kubelet[3297]: I0707 00:18:24.433040 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4gsx8" podStartSLOduration=2.43122556 podStartE2EDuration="2.43122556s" podCreationTimestamp="2025-07-07 00:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:24.431031782 +0000 UTC m=+6.322474881" watchObservedRunningTime="2025-07-07 00:18:24.43122556 +0000 UTC m=+6.322668663" Jul 7 00:18:27.819278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221381954.mount: Deactivated successfully. Jul 7 00:18:30.492073 containerd[1994]: time="2025-07-07T00:18:30.491994113Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:30.494110 containerd[1994]: time="2025-07-07T00:18:30.493944286Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 00:18:30.496393 containerd[1994]: time="2025-07-07T00:18:30.496328888Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:30.498104 containerd[1994]: time="2025-07-07T00:18:30.497738191Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.267936488s" Jul 7 00:18:30.498104 containerd[1994]: time="2025-07-07T00:18:30.497775933Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 00:18:30.499268 containerd[1994]: time="2025-07-07T00:18:30.499240256Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 00:18:30.500258 containerd[1994]: time="2025-07-07T00:18:30.500229682Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:18:30.534366 containerd[1994]: time="2025-07-07T00:18:30.533846464Z" level=info msg="Container 669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:30.536870 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2598273974.mount: Deactivated successfully. Jul 7 00:18:30.582896 containerd[1994]: time="2025-07-07T00:18:30.582845772Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\"" Jul 7 00:18:30.583549 containerd[1994]: time="2025-07-07T00:18:30.583509963Z" level=info msg="StartContainer for \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\"" Jul 7 00:18:30.585632 containerd[1994]: time="2025-07-07T00:18:30.585593589Z" level=info msg="connecting to shim 669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24" address="unix:///run/containerd/s/5c91d7b0396fe1db55870b2209e2c516aa8ff84d5d74460503a6ef5261c0022f" protocol=ttrpc version=3 Jul 7 00:18:30.633582 systemd[1]: Started cri-containerd-669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24.scope - libcontainer container 669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24. Jul 7 00:18:30.672806 containerd[1994]: time="2025-07-07T00:18:30.672747859Z" level=info msg="StartContainer for \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" returns successfully" Jul 7 00:18:30.685457 systemd[1]: cri-containerd-669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24.scope: Deactivated successfully. Jul 7 00:18:30.764581 containerd[1994]: time="2025-07-07T00:18:30.762865171Z" level=info msg="received exit event container_id:\"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" id:\"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" pid:3979 exited_at:{seconds:1751847510 nanos:689685834}" Jul 7 00:18:30.799199 containerd[1994]: time="2025-07-07T00:18:30.799141297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" id:\"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" pid:3979 exited_at:{seconds:1751847510 nanos:689685834}" Jul 7 00:18:30.813900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24-rootfs.mount: Deactivated successfully. 
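containerd reports the init container's exit as a protobuf-style exited_at:{seconds:... nanos:...} pair; converting it back to wall-clock time makes it easy to line the exit up with the surrounding journal timestamps. For the mount-cgroup exit event above:

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    """Convert a protobuf-style {seconds, nanos} pair to an aware UTC datetime."""
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# exited_at value from the 669262df... exit event above:
print(exited_at_to_utc(1751847510, 689685834))
# -> 2025-07-07 00:18:30.689686+00:00, consistent with the surrounding journal times
```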
Jul 7 00:18:31.458190 containerd[1994]: time="2025-07-07T00:18:31.458139286Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:18:31.469366 containerd[1994]: time="2025-07-07T00:18:31.468689135Z" level=info msg="Container e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:31.474562 containerd[1994]: time="2025-07-07T00:18:31.474519521Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\"" Jul 7 00:18:31.475281 containerd[1994]: time="2025-07-07T00:18:31.475180478Z" level=info msg="StartContainer for \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\"" Jul 7 00:18:31.476642 containerd[1994]: time="2025-07-07T00:18:31.476601365Z" level=info msg="connecting to shim e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8" address="unix:///run/containerd/s/5c91d7b0396fe1db55870b2209e2c516aa8ff84d5d74460503a6ef5261c0022f" protocol=ttrpc version=3 Jul 7 00:18:31.502585 systemd[1]: Started cri-containerd-e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8.scope - libcontainer container e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8. Jul 7 00:18:31.553375 containerd[1994]: time="2025-07-07T00:18:31.553272279Z" level=info msg="StartContainer for \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" returns successfully" Jul 7 00:18:31.565648 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:18:31.566012 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:18:31.568470 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:18:31.572055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:18:31.575555 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 00:18:31.575997 systemd[1]: cri-containerd-e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8.scope: Deactivated successfully. Jul 7 00:18:31.578545 containerd[1994]: time="2025-07-07T00:18:31.578510349Z" level=info msg="received exit event container_id:\"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" id:\"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" pid:4021 exited_at:{seconds:1751847511 nanos:577144494}" Jul 7 00:18:31.582412 containerd[1994]: time="2025-07-07T00:18:31.579402180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" id:\"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" pid:4021 exited_at:{seconds:1751847511 nanos:577144494}" Jul 7 00:18:31.618541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:18:31.626549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8-rootfs.mount: Deactivated successfully. 
Jul 7 00:18:32.467362 containerd[1994]: time="2025-07-07T00:18:32.467299929Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:18:32.532691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203072837.mount: Deactivated successfully. Jul 7 00:18:32.544401 containerd[1994]: time="2025-07-07T00:18:32.543644116Z" level=info msg="Container b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:32.560198 containerd[1994]: time="2025-07-07T00:18:32.560158268Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\"" Jul 7 00:18:32.563187 containerd[1994]: time="2025-07-07T00:18:32.563082623Z" level=info msg="StartContainer for \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\"" Jul 7 00:18:32.566924 containerd[1994]: time="2025-07-07T00:18:32.566863116Z" level=info msg="connecting to shim b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab" address="unix:///run/containerd/s/5c91d7b0396fe1db55870b2209e2c516aa8ff84d5d74460503a6ef5261c0022f" protocol=ttrpc version=3 Jul 7 00:18:32.602656 systemd[1]: Started cri-containerd-b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab.scope - libcontainer container b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab. Jul 7 00:18:32.666561 containerd[1994]: time="2025-07-07T00:18:32.666518186Z" level=info msg="StartContainer for \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" returns successfully" Jul 7 00:18:32.676663 containerd[1994]: time="2025-07-07T00:18:32.675677714Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:32.676443 systemd[1]: cri-containerd-b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab.scope: Deactivated successfully. Jul 7 00:18:32.676925 systemd[1]: cri-containerd-b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab.scope: Consumed 26ms CPU time, 3.8M memory peak, 1M read from disk. 
Jul 7 00:18:32.678260 containerd[1994]: time="2025-07-07T00:18:32.678233872Z" level=info msg="received exit event container_id:\"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" id:\"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" pid:4079 exited_at:{seconds:1751847512 nanos:678029338}" Jul 7 00:18:32.678988 containerd[1994]: time="2025-07-07T00:18:32.678954970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" id:\"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" pid:4079 exited_at:{seconds:1751847512 nanos:678029338}" Jul 7 00:18:32.680131 containerd[1994]: time="2025-07-07T00:18:32.680104401Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 00:18:32.682361 containerd[1994]: time="2025-07-07T00:18:32.682095363Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:18:32.684286 containerd[1994]: time="2025-07-07T00:18:32.684255723Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.18498029s" Jul 7 00:18:32.684429 containerd[1994]: time="2025-07-07T00:18:32.684304681Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 00:18:32.722461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab-rootfs.mount: Deactivated successfully. 
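The operator image pull above ends with "active requests=0, bytes read=18904197" after a reported 2.18498029s. Treating bytes read as a rough proxy for transferred data (it ignores cached layers and registry round-trips) gives a back-of-the-envelope throughput comparison with the much larger cilium image pulled earlier:

```python
def pull_throughput_mib_s(bytes_read: int, seconds: float) -> float:
    """Rough pull throughput in MiB/s from containerd's 'bytes read' counter."""
    return bytes_read / seconds / (1024 * 1024)

# operator-generic image (values from the entries above)
print(round(pull_throughput_mib_s(18_904_197, 2.18498029), 1))    # ~8.3 MiB/s
# cilium image pulled earlier (166730503 bytes in 7.267936488 s)
print(round(pull_throughput_mib_s(166_730_503, 7.267936488), 1))  # ~21.9 MiB/s
```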
Jul 7 00:18:32.731406 containerd[1994]: time="2025-07-07T00:18:32.731358252Z" level=info msg="CreateContainer within sandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 00:18:32.757224 containerd[1994]: time="2025-07-07T00:18:32.757178713Z" level=info msg="Container 9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:32.776065 containerd[1994]: time="2025-07-07T00:18:32.776017527Z" level=info msg="CreateContainer within sandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\"" Jul 7 00:18:32.776960 containerd[1994]: time="2025-07-07T00:18:32.776929916Z" level=info msg="StartContainer for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\"" Jul 7 00:18:32.777948 containerd[1994]: time="2025-07-07T00:18:32.777920564Z" level=info msg="connecting to shim 9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47" address="unix:///run/containerd/s/810a189511964dc29cae9ab8884c1ec8adfbb92fce641d2a84f80b89eb289fba" protocol=ttrpc version=3 Jul 7 00:18:32.803843 systemd[1]: Started cri-containerd-9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47.scope - libcontainer container 9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47. Jul 7 00:18:32.844663 containerd[1994]: time="2025-07-07T00:18:32.844625735Z" level=info msg="StartContainer for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" returns successfully" Jul 7 00:18:33.478575 containerd[1994]: time="2025-07-07T00:18:33.478532433Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:18:33.500352 containerd[1994]: time="2025-07-07T00:18:33.500233032Z" level=info msg="Container 6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:33.515109 containerd[1994]: time="2025-07-07T00:18:33.515061763Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\"" Jul 7 00:18:33.516106 containerd[1994]: time="2025-07-07T00:18:33.516071275Z" level=info msg="StartContainer for \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\"" Jul 7 00:18:33.519326 containerd[1994]: time="2025-07-07T00:18:33.519279954Z" level=info msg="connecting to shim 6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117" address="unix:///run/containerd/s/5c91d7b0396fe1db55870b2209e2c516aa8ff84d5d74460503a6ef5261c0022f" protocol=ttrpc version=3 Jul 7 00:18:33.566579 systemd[1]: Started cri-containerd-6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117.scope - libcontainer container 6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117. Jul 7 00:18:33.625838 systemd[1]: cri-containerd-6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117.scope: Deactivated successfully. 
Jul 7 00:18:33.628965 containerd[1994]: time="2025-07-07T00:18:33.628918359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" id:\"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" pid:4154 exited_at:{seconds:1751847513 nanos:628243618}" Jul 7 00:18:33.631877 containerd[1994]: time="2025-07-07T00:18:33.631839115Z" level=info msg="received exit event container_id:\"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" id:\"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" pid:4154 exited_at:{seconds:1751847513 nanos:628243618}" Jul 7 00:18:33.661196 containerd[1994]: time="2025-07-07T00:18:33.661157120Z" level=info msg="StartContainer for \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" returns successfully" Jul 7 00:18:33.699326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117-rootfs.mount: Deactivated successfully. Jul 7 00:18:33.702683 kubelet[3297]: I0707 00:18:33.701132 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-zlrt6" podStartSLOduration=2.479622991 podStartE2EDuration="11.701108257s" podCreationTimestamp="2025-07-07 00:18:22 +0000 UTC" firstStartedPulling="2025-07-07 00:18:23.463660416 +0000 UTC m=+5.355103503" lastFinishedPulling="2025-07-07 00:18:32.68514569 +0000 UTC m=+14.576588769" observedRunningTime="2025-07-07 00:18:33.700498363 +0000 UTC m=+15.591941460" watchObservedRunningTime="2025-07-07 00:18:33.701108257 +0000 UTC m=+15.592551358" Jul 7 00:18:34.494131 containerd[1994]: time="2025-07-07T00:18:34.494086877Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:18:34.525700 containerd[1994]: time="2025-07-07T00:18:34.524554114Z" level=info msg="Container 76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:34.541490 containerd[1994]: time="2025-07-07T00:18:34.541442928Z" level=info msg="CreateContainer within sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\"" Jul 7 00:18:34.542301 containerd[1994]: time="2025-07-07T00:18:34.542270623Z" level=info msg="StartContainer for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\"" Jul 7 00:18:34.543652 containerd[1994]: time="2025-07-07T00:18:34.543607826Z" level=info msg="connecting to shim 76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e" address="unix:///run/containerd/s/5c91d7b0396fe1db55870b2209e2c516aa8ff84d5d74460503a6ef5261c0022f" protocol=ttrpc version=3 Jul 7 00:18:34.579534 systemd[1]: Started cri-containerd-76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e.scope - libcontainer container 76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e. 
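For cilium-operator the tracker prints both podStartE2EDuration and podStartSLOduration, and the gap between them is, to within rounding of the printed timestamps, exactly the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the SLO figure from the values in the entry above:

```python
# Values taken from the cilium-operator startup entry above.
e2e_duration  = 11.701108257   # podStartE2EDuration, seconds
pull_started  = 23.463660416   # firstStartedPulling, seconds past 00:18:00
pull_finished = 32.685145690   # lastFinishedPulling, seconds past 00:18:00

pull_window = pull_finished - pull_started   # ~9.221485 s spent pulling images
slo = e2e_duration - pull_window             # ~2.479623 s
print(slo)  # matches the reported podStartSLOduration=2.479622991 up to rounding
```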
Jul 7 00:18:34.651590 containerd[1994]: time="2025-07-07T00:18:34.651541598Z" level=info msg="StartContainer for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" returns successfully" Jul 7 00:18:34.874366 containerd[1994]: time="2025-07-07T00:18:34.873142422Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" id:\"ddd3f9227134a47cba1f354594e88c7d1334a8a17e3fe1f3a32917b571bf92a0\" pid:4220 exited_at:{seconds:1751847514 nanos:872527040}" Jul 7 00:18:34.955872 kubelet[3297]: I0707 00:18:34.955838 3297 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 00:18:34.994499 systemd[1]: Created slice kubepods-burstable-podaea7cfe7_c40a_4ed6_9a43_4adf36c705c8.slice - libcontainer container kubepods-burstable-podaea7cfe7_c40a_4ed6_9a43_4adf36c705c8.slice. Jul 7 00:18:35.005055 systemd[1]: Created slice kubepods-burstable-pod3794ea09_2b30_4799_9033_a1dc65219fe6.slice - libcontainer container kubepods-burstable-pod3794ea09_2b30_4799_9033_a1dc65219fe6.slice. Jul 7 00:18:35.164485 kubelet[3297]: I0707 00:18:35.164067 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v6mp\" (UniqueName: \"kubernetes.io/projected/aea7cfe7-c40a-4ed6-9a43-4adf36c705c8-kube-api-access-8v6mp\") pod \"coredns-7c65d6cfc9-fc5z6\" (UID: \"aea7cfe7-c40a-4ed6-9a43-4adf36c705c8\") " pod="kube-system/coredns-7c65d6cfc9-fc5z6" Jul 7 00:18:35.164485 kubelet[3297]: I0707 00:18:35.164126 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aea7cfe7-c40a-4ed6-9a43-4adf36c705c8-config-volume\") pod \"coredns-7c65d6cfc9-fc5z6\" (UID: \"aea7cfe7-c40a-4ed6-9a43-4adf36c705c8\") " pod="kube-system/coredns-7c65d6cfc9-fc5z6" Jul 7 00:18:35.164485 kubelet[3297]: I0707 00:18:35.164145 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjk8q\" (UniqueName: \"kubernetes.io/projected/3794ea09-2b30-4799-9033-a1dc65219fe6-kube-api-access-zjk8q\") pod \"coredns-7c65d6cfc9-zmh9b\" (UID: \"3794ea09-2b30-4799-9033-a1dc65219fe6\") " pod="kube-system/coredns-7c65d6cfc9-zmh9b" Jul 7 00:18:35.164485 kubelet[3297]: I0707 00:18:35.164166 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3794ea09-2b30-4799-9033-a1dc65219fe6-config-volume\") pod \"coredns-7c65d6cfc9-zmh9b\" (UID: \"3794ea09-2b30-4799-9033-a1dc65219fe6\") " pod="kube-system/coredns-7c65d6cfc9-zmh9b" Jul 7 00:18:35.311334 containerd[1994]: time="2025-07-07T00:18:35.311292520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmh9b,Uid:3794ea09-2b30-4799-9033-a1dc65219fe6,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:35.600997 containerd[1994]: time="2025-07-07T00:18:35.600960903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fc5z6,Uid:aea7cfe7-c40a-4ed6-9a43-4adf36c705c8,Namespace:kube-system,Attempt:0,}" Jul 7 00:18:37.327028 systemd-networkd[1825]: cilium_host: Link UP Jul 7 00:18:37.329031 systemd-networkd[1825]: cilium_net: Link UP Jul 7 00:18:37.330435 systemd-networkd[1825]: cilium_net: Gained carrier Jul 7 00:18:37.330878 (udev-worker)[4320]: Network interface NamePolicy= disabled on kernel command line. 
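The "Created slice" entries show how the kubelet, running with the systemd cgroup driver and cgroups v2 per the container-manager config earlier in the log, names the per-pod transient slices: kubepods-<qos>-pod followed by the pod UID with dashes turned into underscores. A tiny helper reproducing the names seen in this journal:

```python
def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    """Reproduce the systemd slice naming seen in this journal:
    kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice"""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# coredns pod from the entry above
print(pod_slice_name("aea7cfe7-c40a-4ed6-9a43-4adf36c705c8"))
# -> kubepods-burstable-podaea7cfe7_c40a_4ed6_9a43_4adf36c705c8.slice
# kube-proxy pod created earlier (BestEffort QoS)
print(pod_slice_name("bf588f7d-d600-4c0d-a996-787be3b3edf2", qos_class="besteffort"))
# -> kubepods-besteffort-podbf588f7d_d600_4c0d_a996_787be3b3edf2.slice
```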
Jul 7 00:18:37.331623 systemd-networkd[1825]: cilium_host: Gained carrier Jul 7 00:18:37.332131 (udev-worker)[4271]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:18:37.462674 (udev-worker)[4340]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:18:37.471244 systemd-networkd[1825]: cilium_vxlan: Link UP Jul 7 00:18:37.471258 systemd-networkd[1825]: cilium_vxlan: Gained carrier Jul 7 00:18:38.180417 kernel: NET: Registered PF_ALG protocol family Jul 7 00:18:38.348697 systemd-networkd[1825]: cilium_net: Gained IPv6LL Jul 7 00:18:38.349718 systemd-networkd[1825]: cilium_host: Gained IPv6LL Jul 7 00:18:38.989721 systemd-networkd[1825]: lxc_health: Link UP Jul 7 00:18:38.990718 (udev-worker)[4341]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:18:38.999134 systemd-networkd[1825]: lxc_health: Gained carrier Jul 7 00:18:39.085105 kubelet[3297]: I0707 00:18:39.085039 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tg75z" podStartSLOduration=9.815134622 podStartE2EDuration="17.085017828s" podCreationTimestamp="2025-07-07 00:18:22 +0000 UTC" firstStartedPulling="2025-07-07 00:18:23.228813733 +0000 UTC m=+5.120256818" lastFinishedPulling="2025-07-07 00:18:30.498696946 +0000 UTC m=+12.390140024" observedRunningTime="2025-07-07 00:18:35.54444241 +0000 UTC m=+17.435885504" watchObservedRunningTime="2025-07-07 00:18:39.085017828 +0000 UTC m=+20.976460927" Jul 7 00:18:39.309443 systemd-networkd[1825]: cilium_vxlan: Gained IPv6LL Jul 7 00:18:39.384499 systemd-networkd[1825]: lxc9c066d424655: Link UP Jul 7 00:18:39.390998 kernel: eth0: renamed from tmp6cf26 Jul 7 00:18:39.401068 systemd-networkd[1825]: lxc9c066d424655: Gained carrier Jul 7 00:18:39.651411 systemd-networkd[1825]: lxc19c34223a309: Link UP Jul 7 00:18:39.656701 kernel: eth0: renamed from tmp29164 Jul 7 00:18:39.662734 systemd-networkd[1825]: lxc19c34223a309: Gained carrier Jul 7 00:18:40.462450 systemd-networkd[1825]: lxc_health: Gained IPv6LL Jul 7 00:18:40.908633 systemd-networkd[1825]: lxc9c066d424655: Gained IPv6LL Jul 7 00:18:41.677440 systemd-networkd[1825]: lxc19c34223a309: Gained IPv6LL Jul 7 00:18:44.023536 containerd[1994]: time="2025-07-07T00:18:44.023478138Z" level=info msg="connecting to shim 291649894fe4518e7c9751176f255d6a681d46e62bf7c502e486abf2ba74dab0" address="unix:///run/containerd/s/1ce86b507cf6959cc862e7d3911bc1e0e4aa8503e593b07222b493cf5e61ffcc" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:44.029313 containerd[1994]: time="2025-07-07T00:18:44.027451522Z" level=info msg="connecting to shim 6cf267d8f9d309a5e563a4baef4d0d1d41cd2c78652e304724189d310cb1ed81" address="unix:///run/containerd/s/755ea938bfeb89961ea252618c1c7ba8492c346202b3fba1bb4aabbf1a18b5bb" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:18:44.095591 systemd[1]: Started cri-containerd-291649894fe4518e7c9751176f255d6a681d46e62bf7c502e486abf2ba74dab0.scope - libcontainer container 291649894fe4518e7c9751176f255d6a681d46e62bf7c502e486abf2ba74dab0. Jul 7 00:18:44.099221 systemd[1]: Started cri-containerd-6cf267d8f9d309a5e563a4baef4d0d1d41cd2c78652e304724189d310cb1ed81.scope - libcontainer container 6cf267d8f9d309a5e563a4baef4d0d1d41cd2c78652e304724189d310cb1ed81. 
Jul 7 00:18:44.207662 containerd[1994]: time="2025-07-07T00:18:44.207536860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fc5z6,Uid:aea7cfe7-c40a-4ed6-9a43-4adf36c705c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"291649894fe4518e7c9751176f255d6a681d46e62bf7c502e486abf2ba74dab0\"" Jul 7 00:18:44.209596 containerd[1994]: time="2025-07-07T00:18:44.209554251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zmh9b,Uid:3794ea09-2b30-4799-9033-a1dc65219fe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cf267d8f9d309a5e563a4baef4d0d1d41cd2c78652e304724189d310cb1ed81\"" Jul 7 00:18:44.213839 containerd[1994]: time="2025-07-07T00:18:44.213782820Z" level=info msg="CreateContainer within sandbox \"291649894fe4518e7c9751176f255d6a681d46e62bf7c502e486abf2ba74dab0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:18:44.215245 containerd[1994]: time="2025-07-07T00:18:44.215198387Z" level=info msg="CreateContainer within sandbox \"6cf267d8f9d309a5e563a4baef4d0d1d41cd2c78652e304724189d310cb1ed81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:18:44.241385 containerd[1994]: time="2025-07-07T00:18:44.241101098Z" level=info msg="Container 7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:44.241385 containerd[1994]: time="2025-07-07T00:18:44.241155091Z" level=info msg="Container 3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:18:44.255667 containerd[1994]: time="2025-07-07T00:18:44.255634368Z" level=info msg="CreateContainer within sandbox \"6cf267d8f9d309a5e563a4baef4d0d1d41cd2c78652e304724189d310cb1ed81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11\"" Jul 7 00:18:44.257553 containerd[1994]: time="2025-07-07T00:18:44.257489764Z" level=info msg="StartContainer for \"7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11\"" Jul 7 00:18:44.258283 containerd[1994]: time="2025-07-07T00:18:44.258232185Z" level=info msg="connecting to shim 7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11" address="unix:///run/containerd/s/755ea938bfeb89961ea252618c1c7ba8492c346202b3fba1bb4aabbf1a18b5bb" protocol=ttrpc version=3 Jul 7 00:18:44.260292 containerd[1994]: time="2025-07-07T00:18:44.260241842Z" level=info msg="CreateContainer within sandbox \"291649894fe4518e7c9751176f255d6a681d46e62bf7c502e486abf2ba74dab0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64\"" Jul 7 00:18:44.262206 containerd[1994]: time="2025-07-07T00:18:44.262170157Z" level=info msg="StartContainer for \"3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64\"" Jul 7 00:18:44.264262 containerd[1994]: time="2025-07-07T00:18:44.264214787Z" level=info msg="connecting to shim 3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64" address="unix:///run/containerd/s/1ce86b507cf6959cc862e7d3911bc1e0e4aa8503e593b07222b493cf5e61ffcc" protocol=ttrpc version=3 Jul 7 00:18:44.296613 systemd[1]: Started cri-containerd-7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11.scope - libcontainer container 7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11. 
Jul 7 00:18:44.306618 systemd[1]: Started cri-containerd-3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64.scope - libcontainer container 3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64. Jul 7 00:18:44.380219 containerd[1994]: time="2025-07-07T00:18:44.379939761Z" level=info msg="StartContainer for \"3ae192f2c360aab808f4ca70816db6fc1f30b934ed3eeaf8c11fd20986eb0e64\" returns successfully" Jul 7 00:18:44.381975 containerd[1994]: time="2025-07-07T00:18:44.381522817Z" level=info msg="StartContainer for \"7dabe1827141e1d3c684c0d6092fff6725849691cffae697ac10b1bfb9545d11\" returns successfully" Jul 7 00:18:44.598685 kubelet[3297]: I0707 00:18:44.598560 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zmh9b" podStartSLOduration=22.598540533 podStartE2EDuration="22.598540533s" podCreationTimestamp="2025-07-07 00:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:44.578985421 +0000 UTC m=+26.470428519" watchObservedRunningTime="2025-07-07 00:18:44.598540533 +0000 UTC m=+26.489983631" Jul 7 00:18:44.990077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386188433.mount: Deactivated successfully. Jul 7 00:18:45.328125 kubelet[3297]: I0707 00:18:45.327926 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fc5z6" podStartSLOduration=23.327906794 podStartE2EDuration="23.327906794s" podCreationTimestamp="2025-07-07 00:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:18:44.599535421 +0000 UTC m=+26.490978516" watchObservedRunningTime="2025-07-07 00:18:45.327906794 +0000 UTC m=+27.219349891" Jul 7 00:18:46.576595 ntpd[1966]: Listen normally on 7 cilium_host 192.168.0.51:123 Jul 7 00:18:46.576690 ntpd[1966]: Listen normally on 8 cilium_net [fe80::cce6:3bff:fe96:abcb%4]:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 7 cilium_host 192.168.0.51:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 8 cilium_net [fe80::cce6:3bff:fe96:abcb%4]:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 9 cilium_host [fe80::aca3:e8ff:fef0:3d94%5]:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 10 cilium_vxlan [fe80::7886:88ff:febf:bab4%6]:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 11 lxc_health [fe80::47d:4aff:fe72:2538%8]:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 12 lxc9c066d424655 [fe80::fc49:4dff:fec8:b4c2%10]:123 Jul 7 00:18:46.577137 ntpd[1966]: 7 Jul 00:18:46 ntpd[1966]: Listen normally on 13 lxc19c34223a309 [fe80::fcf4:8bff:fec2:d32%12]:123 Jul 7 00:18:46.576762 ntpd[1966]: Listen normally on 9 cilium_host [fe80::aca3:e8ff:fef0:3d94%5]:123 Jul 7 00:18:46.576803 ntpd[1966]: Listen normally on 10 cilium_vxlan [fe80::7886:88ff:febf:bab4%6]:123 Jul 7 00:18:46.576843 ntpd[1966]: Listen normally on 11 lxc_health [fe80::47d:4aff:fe72:2538%8]:123 Jul 7 00:18:46.576880 ntpd[1966]: Listen normally on 12 lxc9c066d424655 [fe80::fc49:4dff:fec8:b4c2%10]:123 Jul 7 00:18:46.576917 ntpd[1966]: Listen normally on 13 lxc19c34223a309 [fe80::fcf4:8bff:fec2:d32%12]:123 Jul 7 00:18:51.591896 kubelet[3297]: I0707 00:18:51.591616 3297 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Jul 7 00:19:05.449747 systemd[1]: Started sshd@9-172.31.31.140:22-147.75.109.163:39046.service - OpenSSH per-connection server daemon (147.75.109.163:39046). Jul 7 00:19:05.669865 sshd[4862]: Accepted publickey for core from 147.75.109.163 port 39046 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:05.672091 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:05.679036 systemd-logind[1973]: New session 10 of user core. Jul 7 00:19:05.688598 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:19:06.534973 sshd[4864]: Connection closed by 147.75.109.163 port 39046 Jul 7 00:19:06.535964 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:06.541571 systemd[1]: sshd@9-172.31.31.140:22-147.75.109.163:39046.service: Deactivated successfully. Jul 7 00:19:06.544028 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:19:06.545150 systemd-logind[1973]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:19:06.547263 systemd-logind[1973]: Removed session 10. Jul 7 00:19:11.575809 systemd[1]: Started sshd@10-172.31.31.140:22-147.75.109.163:49974.service - OpenSSH per-connection server daemon (147.75.109.163:49974). Jul 7 00:19:11.773101 sshd[4877]: Accepted publickey for core from 147.75.109.163 port 49974 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:11.774693 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:11.781266 systemd-logind[1973]: New session 11 of user core. Jul 7 00:19:11.800023 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:19:12.013282 sshd[4880]: Connection closed by 147.75.109.163 port 49974 Jul 7 00:19:12.013867 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:12.017774 systemd[1]: sshd@10-172.31.31.140:22-147.75.109.163:49974.service: Deactivated successfully. Jul 7 00:19:12.020502 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:19:12.022058 systemd-logind[1973]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:19:12.024059 systemd-logind[1973]: Removed session 11. Jul 7 00:19:17.052661 systemd[1]: Started sshd@11-172.31.31.140:22-147.75.109.163:36162.service - OpenSSH per-connection server daemon (147.75.109.163:36162). Jul 7 00:19:17.236481 sshd[4893]: Accepted publickey for core from 147.75.109.163 port 36162 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:17.238035 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:17.244111 systemd-logind[1973]: New session 12 of user core. Jul 7 00:19:17.251700 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:19:17.465789 sshd[4895]: Connection closed by 147.75.109.163 port 36162 Jul 7 00:19:17.466497 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:17.471449 systemd-logind[1973]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:19:17.472182 systemd[1]: sshd@11-172.31.31.140:22-147.75.109.163:36162.service: Deactivated successfully. Jul 7 00:19:17.474956 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:19:17.478014 systemd-logind[1973]: Removed session 12. 
Jul 7 00:19:17.503759 systemd[1]: Started sshd@12-172.31.31.140:22-147.75.109.163:36172.service - OpenSSH per-connection server daemon (147.75.109.163:36172). Jul 7 00:19:17.693433 sshd[4908]: Accepted publickey for core from 147.75.109.163 port 36172 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:17.696536 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:17.715190 systemd-logind[1973]: New session 13 of user core. Jul 7 00:19:17.724624 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:19:17.973461 sshd[4910]: Connection closed by 147.75.109.163 port 36172 Jul 7 00:19:17.974322 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:17.984530 systemd-logind[1973]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:19:17.987668 systemd[1]: sshd@12-172.31.31.140:22-147.75.109.163:36172.service: Deactivated successfully. Jul 7 00:19:17.990976 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:19:18.010871 systemd-logind[1973]: Removed session 13. Jul 7 00:19:18.013547 systemd[1]: Started sshd@13-172.31.31.140:22-147.75.109.163:36188.service - OpenSSH per-connection server daemon (147.75.109.163:36188). Jul 7 00:19:18.204970 sshd[4920]: Accepted publickey for core from 147.75.109.163 port 36188 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:18.206951 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:18.214079 systemd-logind[1973]: New session 14 of user core. Jul 7 00:19:18.220620 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:19:18.509433 sshd[4922]: Connection closed by 147.75.109.163 port 36188 Jul 7 00:19:18.510567 sshd-session[4920]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:18.515285 systemd[1]: sshd@13-172.31.31.140:22-147.75.109.163:36188.service: Deactivated successfully. Jul 7 00:19:18.517969 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:19:18.519487 systemd-logind[1973]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:19:18.522199 systemd-logind[1973]: Removed session 14. Jul 7 00:19:23.545762 systemd[1]: Started sshd@14-172.31.31.140:22-147.75.109.163:36194.service - OpenSSH per-connection server daemon (147.75.109.163:36194). Jul 7 00:19:23.717181 sshd[4938]: Accepted publickey for core from 147.75.109.163 port 36194 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:23.718683 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:23.725374 systemd-logind[1973]: New session 15 of user core. Jul 7 00:19:23.732612 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:19:23.923548 sshd[4940]: Connection closed by 147.75.109.163 port 36194 Jul 7 00:19:23.924272 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:23.929043 systemd[1]: sshd@14-172.31.31.140:22-147.75.109.163:36194.service: Deactivated successfully. Jul 7 00:19:23.931953 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:19:23.933091 systemd-logind[1973]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:19:23.935867 systemd-logind[1973]: Removed session 15. Jul 7 00:19:28.962708 systemd[1]: Started sshd@15-172.31.31.140:22-147.75.109.163:46538.service - OpenSSH per-connection server daemon (147.75.109.163:46538). 
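Each SSH connection above and below leaves the same trace: systemd starts a per-connection sshd@… unit, sshd accepts the public key, systemd-logind opens a numbered session, and shortly afterwards the session is closed and the unit is deactivated. As a rough illustration (not part of any tooling referenced in this log), the Python sketch below pairs the "New session"/"Removed session" lines to compute how long each session lasted, assuming a saved journal with one entry per line shaped like the entries shown here:

```python
import re
from datetime import datetime

# Matches journal lines such as:
#   "Jul 7 00:19:11.781266 systemd-logind[1973]: New session 11 of user core."
#   "Jul 7 00:19:12.024059 systemd-logind[1973]: Removed session 11."
LINE = re.compile(
    r"(?P<month>\w{3}) +(?P<day>\d+) (?P<time>\d\d:\d\d:\d\d\.\d+) "
    r"systemd-logind\[\d+\]: (?P<event>New|Removed) session (?P<sid>\d+)"
)

def session_durations(lines, year=2025):
    """Yield (session id, duration in seconds) for each open/close pair."""
    opened = {}
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        ts = datetime.strptime(
            f"{year} {m['month']} {m['day']} {m['time']}", "%Y %b %d %H:%M:%S.%f"
        )
        if m["event"] == "New":
            opened[m["sid"]] = ts
        elif m["sid"] in opened:
            yield m["sid"], (ts - opened.pop(m["sid"])).total_seconds()

# Example usage with the journal saved to a file (hypothetical path):
# for sid, secs in session_durations(open("journal.txt")):
#     print(f"session {sid}: {secs:.1f}s")
```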
Jul 7 00:19:29.140814 sshd[4954]: Accepted publickey for core from 147.75.109.163 port 46538 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:29.142327 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:29.147400 systemd-logind[1973]: New session 16 of user core. Jul 7 00:19:29.154549 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:19:29.348625 sshd[4956]: Connection closed by 147.75.109.163 port 46538 Jul 7 00:19:29.349446 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:29.353675 systemd[1]: sshd@15-172.31.31.140:22-147.75.109.163:46538.service: Deactivated successfully. Jul 7 00:19:29.356075 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:19:29.357652 systemd-logind[1973]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:19:29.360047 systemd-logind[1973]: Removed session 16. Jul 7 00:19:34.383314 systemd[1]: Started sshd@16-172.31.31.140:22-147.75.109.163:46550.service - OpenSSH per-connection server daemon (147.75.109.163:46550). Jul 7 00:19:34.556930 sshd[4968]: Accepted publickey for core from 147.75.109.163 port 46550 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:34.558520 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:34.564178 systemd-logind[1973]: New session 17 of user core. Jul 7 00:19:34.571649 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:19:34.831520 sshd[4970]: Connection closed by 147.75.109.163 port 46550 Jul 7 00:19:34.832585 sshd-session[4968]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:34.836817 systemd[1]: sshd@16-172.31.31.140:22-147.75.109.163:46550.service: Deactivated successfully. Jul 7 00:19:34.840594 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:19:34.842563 systemd-logind[1973]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:19:34.845015 systemd-logind[1973]: Removed session 17. Jul 7 00:19:34.872515 systemd[1]: Started sshd@17-172.31.31.140:22-147.75.109.163:46558.service - OpenSSH per-connection server daemon (147.75.109.163:46558). Jul 7 00:19:35.071466 sshd[4982]: Accepted publickey for core from 147.75.109.163 port 46558 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:35.073564 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:35.082478 systemd-logind[1973]: New session 18 of user core. Jul 7 00:19:35.087834 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 00:19:35.677508 sshd[4984]: Connection closed by 147.75.109.163 port 46558 Jul 7 00:19:35.678208 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:35.687124 systemd[1]: sshd@17-172.31.31.140:22-147.75.109.163:46558.service: Deactivated successfully. Jul 7 00:19:35.694549 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:19:35.698422 systemd-logind[1973]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:19:35.710678 systemd-logind[1973]: Removed session 18. Jul 7 00:19:35.712608 systemd[1]: Started sshd@18-172.31.31.140:22-147.75.109.163:46574.service - OpenSSH per-connection server daemon (147.75.109.163:46574). 
Jul 7 00:19:35.894110 sshd[4993]: Accepted publickey for core from 147.75.109.163 port 46574 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:35.895992 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:35.901429 systemd-logind[1973]: New session 19 of user core. Jul 7 00:19:35.911763 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:19:37.905218 sshd[4995]: Connection closed by 147.75.109.163 port 46574 Jul 7 00:19:37.906197 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:37.945010 systemd[1]: sshd@18-172.31.31.140:22-147.75.109.163:46574.service: Deactivated successfully. Jul 7 00:19:37.949805 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:19:37.952416 systemd-logind[1973]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:19:37.957677 systemd[1]: Started sshd@19-172.31.31.140:22-147.75.109.163:46276.service - OpenSSH per-connection server daemon (147.75.109.163:46276). Jul 7 00:19:37.960088 systemd-logind[1973]: Removed session 19. Jul 7 00:19:38.160702 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 46276 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:38.163913 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:38.171308 systemd-logind[1973]: New session 20 of user core. Jul 7 00:19:38.183854 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:19:38.642433 sshd[5014]: Connection closed by 147.75.109.163 port 46276 Jul 7 00:19:38.643160 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:38.648322 systemd-logind[1973]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:19:38.648883 systemd[1]: sshd@19-172.31.31.140:22-147.75.109.163:46276.service: Deactivated successfully. Jul 7 00:19:38.651317 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:19:38.654145 systemd-logind[1973]: Removed session 20. Jul 7 00:19:38.680078 systemd[1]: Started sshd@20-172.31.31.140:22-147.75.109.163:46290.service - OpenSSH per-connection server daemon (147.75.109.163:46290). Jul 7 00:19:38.876931 sshd[5024]: Accepted publickey for core from 147.75.109.163 port 46290 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:38.878210 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:38.884381 systemd-logind[1973]: New session 21 of user core. Jul 7 00:19:38.894604 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:19:39.095174 sshd[5026]: Connection closed by 147.75.109.163 port 46290 Jul 7 00:19:39.096000 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:39.100882 systemd[1]: sshd@20-172.31.31.140:22-147.75.109.163:46290.service: Deactivated successfully. Jul 7 00:19:39.103286 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:19:39.105439 systemd-logind[1973]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:19:39.107766 systemd-logind[1973]: Removed session 21. Jul 7 00:19:44.134089 systemd[1]: Started sshd@21-172.31.31.140:22-147.75.109.163:46304.service - OpenSSH per-connection server daemon (147.75.109.163:46304). 
Jul 7 00:19:44.314185 sshd[5042]: Accepted publickey for core from 147.75.109.163 port 46304 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:44.315934 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:44.321416 systemd-logind[1973]: New session 22 of user core. Jul 7 00:19:44.326568 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:19:44.508959 sshd[5044]: Connection closed by 147.75.109.163 port 46304 Jul 7 00:19:44.509539 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:44.513388 systemd[1]: sshd@21-172.31.31.140:22-147.75.109.163:46304.service: Deactivated successfully. Jul 7 00:19:44.515729 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:19:44.517308 systemd-logind[1973]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:19:44.519737 systemd-logind[1973]: Removed session 22. Jul 7 00:19:49.556651 systemd[1]: Started sshd@22-172.31.31.140:22-147.75.109.163:60354.service - OpenSSH per-connection server daemon (147.75.109.163:60354). Jul 7 00:19:49.739009 sshd[5055]: Accepted publickey for core from 147.75.109.163 port 60354 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:49.740483 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:49.746156 systemd-logind[1973]: New session 23 of user core. Jul 7 00:19:49.751817 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:19:49.946588 sshd[5057]: Connection closed by 147.75.109.163 port 60354 Jul 7 00:19:49.947466 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:49.951712 systemd[1]: sshd@22-172.31.31.140:22-147.75.109.163:60354.service: Deactivated successfully. Jul 7 00:19:49.953582 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:19:49.954742 systemd-logind[1973]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:19:49.956427 systemd-logind[1973]: Removed session 23. Jul 7 00:19:54.980301 systemd[1]: Started sshd@23-172.31.31.140:22-147.75.109.163:60362.service - OpenSSH per-connection server daemon (147.75.109.163:60362). Jul 7 00:19:55.182000 sshd[5071]: Accepted publickey for core from 147.75.109.163 port 60362 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:55.183983 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:55.190625 systemd-logind[1973]: New session 24 of user core. Jul 7 00:19:55.196419 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:19:55.434333 sshd[5073]: Connection closed by 147.75.109.163 port 60362 Jul 7 00:19:55.435848 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:55.440476 systemd[1]: sshd@23-172.31.31.140:22-147.75.109.163:60362.service: Deactivated successfully. Jul 7 00:19:55.443126 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:19:55.444663 systemd-logind[1973]: Session 24 logged out. Waiting for processes to exit. Jul 7 00:19:55.446500 systemd-logind[1973]: Removed session 24. Jul 7 00:19:55.469739 systemd[1]: Started sshd@24-172.31.31.140:22-147.75.109.163:60364.service - OpenSSH per-connection server daemon (147.75.109.163:60364). 
Jul 7 00:19:55.647797 sshd[5085]: Accepted publickey for core from 147.75.109.163 port 60364 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:55.649329 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:55.655002 systemd-logind[1973]: New session 25 of user core. Jul 7 00:19:55.663897 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:19:57.105288 containerd[1994]: time="2025-07-07T00:19:57.105230774Z" level=info msg="StopContainer for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" with timeout 30 (s)" Jul 7 00:19:57.108316 containerd[1994]: time="2025-07-07T00:19:57.108275574Z" level=info msg="Stop container \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" with signal terminated" Jul 7 00:19:57.131862 systemd[1]: cri-containerd-9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47.scope: Deactivated successfully. Jul 7 00:19:57.135909 containerd[1994]: time="2025-07-07T00:19:57.135832417Z" level=info msg="received exit event container_id:\"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" id:\"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" pid:4120 exited_at:{seconds:1751847597 nanos:134237935}" Jul 7 00:19:57.136735 containerd[1994]: time="2025-07-07T00:19:57.136646342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" id:\"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" pid:4120 exited_at:{seconds:1751847597 nanos:134237935}" Jul 7 00:19:57.158453 containerd[1994]: time="2025-07-07T00:19:57.157719878Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:19:57.162970 containerd[1994]: time="2025-07-07T00:19:57.162923070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" id:\"b914f3ba465e79c0db556f8c58a0297020afcee0a1fa67207b34534ab1ee72d1\" pid:5114 exited_at:{seconds:1751847597 nanos:162564051}" Jul 7 00:19:57.168027 containerd[1994]: time="2025-07-07T00:19:57.167993025Z" level=info msg="StopContainer for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" with timeout 2 (s)" Jul 7 00:19:57.169251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47-rootfs.mount: Deactivated successfully. 
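The containerd exit events in this part of the log carry exit times as protobuf-style exited_at:{seconds:… nanos:…} pairs rather than formatted timestamps. A minimal Python sketch (illustrative only) converts the value from the exit event above, seconds:1751847597 nanos:134237935, back to UTC and shows it lines up with the surrounding 00:19:57.13… journal times:

```python
from datetime import datetime, timezone

# exited_at value from the containerd exit event above.
seconds, nanos = 1751847597, 134237935

exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())
# -> 2025-07-07T00:19:57.134238+00:00, in line with the surrounding
#    00:19:57.13… journal timestamps.
```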
Jul 7 00:19:57.169551 containerd[1994]: time="2025-07-07T00:19:57.169527217Z" level=info msg="Stop container \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" with signal terminated" Jul 7 00:19:57.181046 systemd-networkd[1825]: lxc_health: Link DOWN Jul 7 00:19:57.181168 systemd-networkd[1825]: lxc_health: Lost carrier Jul 7 00:19:57.197084 containerd[1994]: time="2025-07-07T00:19:57.196941305Z" level=info msg="StopContainer for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" returns successfully" Jul 7 00:19:57.198644 containerd[1994]: time="2025-07-07T00:19:57.198021612Z" level=info msg="StopPodSandbox for \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\"" Jul 7 00:19:57.206899 containerd[1994]: time="2025-07-07T00:19:57.206835489Z" level=info msg="Container to stop \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:19:57.209024 systemd[1]: cri-containerd-76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e.scope: Deactivated successfully. Jul 7 00:19:57.209427 systemd[1]: cri-containerd-76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e.scope: Consumed 8.256s CPU time, 220.9M memory peak, 97.8M read from disk, 13.3M written to disk. Jul 7 00:19:57.211691 containerd[1994]: time="2025-07-07T00:19:57.211433336Z" level=info msg="received exit event container_id:\"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" id:\"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" pid:4193 exited_at:{seconds:1751847597 nanos:210140195}" Jul 7 00:19:57.213881 containerd[1994]: time="2025-07-07T00:19:57.213776073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" id:\"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" pid:4193 exited_at:{seconds:1751847597 nanos:210140195}" Jul 7 00:19:57.234212 systemd[1]: cri-containerd-90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015.scope: Deactivated successfully. Jul 7 00:19:57.241587 containerd[1994]: time="2025-07-07T00:19:57.241483968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" id:\"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" pid:3784 exit_status:137 exited_at:{seconds:1751847597 nanos:240973282}" Jul 7 00:19:57.254020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e-rootfs.mount: Deactivated successfully. 
Jul 7 00:19:57.279047 containerd[1994]: time="2025-07-07T00:19:57.278920918Z" level=info msg="StopContainer for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" returns successfully" Jul 7 00:19:57.280317 containerd[1994]: time="2025-07-07T00:19:57.280287085Z" level=info msg="StopPodSandbox for \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\"" Jul 7 00:19:57.280825 containerd[1994]: time="2025-07-07T00:19:57.280428055Z" level=info msg="Container to stop \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:19:57.280825 containerd[1994]: time="2025-07-07T00:19:57.280447366Z" level=info msg="Container to stop \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:19:57.280825 containerd[1994]: time="2025-07-07T00:19:57.280463421Z" level=info msg="Container to stop \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:19:57.280825 containerd[1994]: time="2025-07-07T00:19:57.280476821Z" level=info msg="Container to stop \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:19:57.280825 containerd[1994]: time="2025-07-07T00:19:57.280490931Z" level=info msg="Container to stop \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 00:19:57.291014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015-rootfs.mount: Deactivated successfully. Jul 7 00:19:57.294725 systemd[1]: cri-containerd-cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11.scope: Deactivated successfully. Jul 7 00:19:57.304777 containerd[1994]: time="2025-07-07T00:19:57.304505005Z" level=info msg="shim disconnected" id=90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015 namespace=k8s.io Jul 7 00:19:57.305068 containerd[1994]: time="2025-07-07T00:19:57.305042675Z" level=warning msg="cleaning up after shim disconnected" id=90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015 namespace=k8s.io Jul 7 00:19:57.305501 containerd[1994]: time="2025-07-07T00:19:57.305195218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:19:57.334285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11-rootfs.mount: Deactivated successfully. Jul 7 00:19:57.343464 containerd[1994]: time="2025-07-07T00:19:57.343415548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" id:\"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" pid:3715 exit_status:137 exited_at:{seconds:1751847597 nanos:296578617}" Jul 7 00:19:57.353487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015-shm.mount: Deactivated successfully. 
Jul 7 00:19:57.359361 containerd[1994]: time="2025-07-07T00:19:57.357924019Z" level=info msg="TearDown network for sandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" successfully" Jul 7 00:19:57.359361 containerd[1994]: time="2025-07-07T00:19:57.357966495Z" level=info msg="StopPodSandbox for \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" returns successfully" Jul 7 00:19:57.359361 containerd[1994]: time="2025-07-07T00:19:57.358191340Z" level=info msg="received exit event sandbox_id:\"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" exit_status:137 exited_at:{seconds:1751847597 nanos:240973282}" Jul 7 00:19:57.361373 containerd[1994]: time="2025-07-07T00:19:57.361115493Z" level=info msg="received exit event sandbox_id:\"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" exit_status:137 exited_at:{seconds:1751847597 nanos:296578617}" Jul 7 00:19:57.366264 containerd[1994]: time="2025-07-07T00:19:57.366220428Z" level=info msg="TearDown network for sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" successfully" Jul 7 00:19:57.366264 containerd[1994]: time="2025-07-07T00:19:57.366254371Z" level=info msg="StopPodSandbox for \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" returns successfully" Jul 7 00:19:57.369375 containerd[1994]: time="2025-07-07T00:19:57.368011014Z" level=info msg="shim disconnected" id=cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11 namespace=k8s.io Jul 7 00:19:57.369375 containerd[1994]: time="2025-07-07T00:19:57.368043507Z" level=warning msg="cleaning up after shim disconnected" id=cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11 namespace=k8s.io Jul 7 00:19:57.369375 containerd[1994]: time="2025-07-07T00:19:57.368054055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:19:57.524620 kubelet[3297]: I0707 00:19:57.524572 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-etc-cni-netd\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.524620 kubelet[3297]: I0707 00:19:57.524618 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-run\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.524620 kubelet[3297]: I0707 00:19:57.524637 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cni-path\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.524620 kubelet[3297]: I0707 00:19:57.524650 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-xtables-lock\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.525127 kubelet[3297]: I0707 00:19:57.524675 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79bw\" (UniqueName: \"kubernetes.io/projected/a29ba126-2dd2-4518-86d9-d7cf6f445808-kube-api-access-h79bw\") pod 
\"a29ba126-2dd2-4518-86d9-d7cf6f445808\" (UID: \"a29ba126-2dd2-4518-86d9-d7cf6f445808\") " Jul 7 00:19:57.525127 kubelet[3297]: I0707 00:19:57.524691 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-lib-modules\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.525127 kubelet[3297]: I0707 00:19:57.524705 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-net\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.525127 kubelet[3297]: I0707 00:19:57.524720 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-bpf-maps\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.525127 kubelet[3297]: I0707 00:19:57.524736 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4da188ec-67ac-46c3-b7e5-db5d8349946a-clustermesh-secrets\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.525127 kubelet[3297]: I0707 00:19:57.524751 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-cgroup\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.526522 kubelet[3297]: I0707 00:19:57.524766 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-kernel\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.526522 kubelet[3297]: I0707 00:19:57.524783 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a29ba126-2dd2-4518-86d9-d7cf6f445808-cilium-config-path\") pod \"a29ba126-2dd2-4518-86d9-d7cf6f445808\" (UID: \"a29ba126-2dd2-4518-86d9-d7cf6f445808\") " Jul 7 00:19:57.526522 kubelet[3297]: I0707 00:19:57.524800 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-config-path\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.526522 kubelet[3297]: I0707 00:19:57.524815 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57qbx\" (UniqueName: \"kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-kube-api-access-57qbx\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.526522 kubelet[3297]: I0707 00:19:57.524836 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-hubble-tls\") pod 
\"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.526522 kubelet[3297]: I0707 00:19:57.524852 3297 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-hostproc\") pod \"4da188ec-67ac-46c3-b7e5-db5d8349946a\" (UID: \"4da188ec-67ac-46c3-b7e5-db5d8349946a\") " Jul 7 00:19:57.526678 kubelet[3297]: I0707 00:19:57.524920 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-hostproc" (OuterVolumeSpecName: "hostproc") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526678 kubelet[3297]: I0707 00:19:57.524955 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526678 kubelet[3297]: I0707 00:19:57.524969 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526678 kubelet[3297]: I0707 00:19:57.524980 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cni-path" (OuterVolumeSpecName: "cni-path") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526678 kubelet[3297]: I0707 00:19:57.524991 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526808 kubelet[3297]: I0707 00:19:57.525276 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526808 kubelet[3297]: I0707 00:19:57.525316 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526808 kubelet[3297]: I0707 00:19:57.525334 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.526808 kubelet[3297]: I0707 00:19:57.525381 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.528212 kubelet[3297]: I0707 00:19:57.527841 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29ba126-2dd2-4518-86d9-d7cf6f445808-kube-api-access-h79bw" (OuterVolumeSpecName: "kube-api-access-h79bw") pod "a29ba126-2dd2-4518-86d9-d7cf6f445808" (UID: "a29ba126-2dd2-4518-86d9-d7cf6f445808"). InnerVolumeSpecName "kube-api-access-h79bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:19:57.528212 kubelet[3297]: I0707 00:19:57.527900 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 00:19:57.528777 kubelet[3297]: I0707 00:19:57.528751 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da188ec-67ac-46c3-b7e5-db5d8349946a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:19:57.529659 kubelet[3297]: I0707 00:19:57.529634 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29ba126-2dd2-4518-86d9-d7cf6f445808-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a29ba126-2dd2-4518-86d9-d7cf6f445808" (UID: "a29ba126-2dd2-4518-86d9-d7cf6f445808"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:19:57.531187 kubelet[3297]: I0707 00:19:57.531156 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-kube-api-access-57qbx" (OuterVolumeSpecName: "kube-api-access-57qbx") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "kube-api-access-57qbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:19:57.532134 kubelet[3297]: I0707 00:19:57.532106 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:19:57.533241 kubelet[3297]: I0707 00:19:57.533206 3297 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4da188ec-67ac-46c3-b7e5-db5d8349946a" (UID: "4da188ec-67ac-46c3-b7e5-db5d8349946a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625640 3297 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-etc-cni-netd\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625683 3297 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-run\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625697 3297 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cni-path\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625709 3297 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-xtables-lock\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625729 3297 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79bw\" (UniqueName: \"kubernetes.io/projected/a29ba126-2dd2-4518-86d9-d7cf6f445808-kube-api-access-h79bw\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625746 3297 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-lib-modules\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.625769 kubelet[3297]: I0707 00:19:57.625759 3297 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-net\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625783 3297 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-bpf-maps\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625794 3297 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4da188ec-67ac-46c3-b7e5-db5d8349946a-clustermesh-secrets\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625805 3297 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-cgroup\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625816 3297 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-host-proc-sys-kernel\") on node 
\"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625828 3297 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a29ba126-2dd2-4518-86d9-d7cf6f445808-cilium-config-path\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625839 3297 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4da188ec-67ac-46c3-b7e5-db5d8349946a-cilium-config-path\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625850 3297 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57qbx\" (UniqueName: \"kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-kube-api-access-57qbx\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626144 kubelet[3297]: I0707 00:19:57.625862 3297 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4da188ec-67ac-46c3-b7e5-db5d8349946a-hubble-tls\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.626497 kubelet[3297]: I0707 00:19:57.625874 3297 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4da188ec-67ac-46c3-b7e5-db5d8349946a-hostproc\") on node \"ip-172-31-31-140\" DevicePath \"\"" Jul 7 00:19:57.782896 kubelet[3297]: I0707 00:19:57.782798 3297 scope.go:117] "RemoveContainer" containerID="76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e" Jul 7 00:19:57.789608 containerd[1994]: time="2025-07-07T00:19:57.789105666Z" level=info msg="RemoveContainer for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\"" Jul 7 00:19:57.791413 systemd[1]: Removed slice kubepods-burstable-pod4da188ec_67ac_46c3_b7e5_db5d8349946a.slice - libcontainer container kubepods-burstable-pod4da188ec_67ac_46c3_b7e5_db5d8349946a.slice. Jul 7 00:19:57.791770 systemd[1]: kubepods-burstable-pod4da188ec_67ac_46c3_b7e5_db5d8349946a.slice: Consumed 8.365s CPU time, 221.2M memory peak, 98.8M read from disk, 13.3M written to disk. Jul 7 00:19:57.801996 systemd[1]: Removed slice kubepods-besteffort-poda29ba126_2dd2_4518_86d9_d7cf6f445808.slice - libcontainer container kubepods-besteffort-poda29ba126_2dd2_4518_86d9_d7cf6f445808.slice. 
Jul 7 00:19:57.808779 containerd[1994]: time="2025-07-07T00:19:57.808665772Z" level=info msg="RemoveContainer for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" returns successfully" Jul 7 00:19:57.809776 kubelet[3297]: I0707 00:19:57.809663 3297 scope.go:117] "RemoveContainer" containerID="6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117" Jul 7 00:19:57.812152 containerd[1994]: time="2025-07-07T00:19:57.812100723Z" level=info msg="RemoveContainer for \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\"" Jul 7 00:19:57.821764 containerd[1994]: time="2025-07-07T00:19:57.821701971Z" level=info msg="RemoveContainer for \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" returns successfully" Jul 7 00:19:57.822403 kubelet[3297]: I0707 00:19:57.822164 3297 scope.go:117] "RemoveContainer" containerID="b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab" Jul 7 00:19:57.826668 containerd[1994]: time="2025-07-07T00:19:57.826621678Z" level=info msg="RemoveContainer for \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\"" Jul 7 00:19:57.833273 containerd[1994]: time="2025-07-07T00:19:57.833228126Z" level=info msg="RemoveContainer for \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" returns successfully" Jul 7 00:19:57.833515 kubelet[3297]: I0707 00:19:57.833489 3297 scope.go:117] "RemoveContainer" containerID="e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8" Jul 7 00:19:57.836888 containerd[1994]: time="2025-07-07T00:19:57.836321516Z" level=info msg="RemoveContainer for \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\"" Jul 7 00:19:57.843495 containerd[1994]: time="2025-07-07T00:19:57.843452644Z" level=info msg="RemoveContainer for \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" returns successfully" Jul 7 00:19:57.843821 kubelet[3297]: I0707 00:19:57.843679 3297 scope.go:117] "RemoveContainer" containerID="669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24" Jul 7 00:19:57.845395 containerd[1994]: time="2025-07-07T00:19:57.845336005Z" level=info msg="RemoveContainer for \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\"" Jul 7 00:19:57.851975 containerd[1994]: time="2025-07-07T00:19:57.851927983Z" level=info msg="RemoveContainer for \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" returns successfully" Jul 7 00:19:57.852288 kubelet[3297]: I0707 00:19:57.852267 3297 scope.go:117] "RemoveContainer" containerID="76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e" Jul 7 00:19:57.862024 containerd[1994]: time="2025-07-07T00:19:57.853405542Z" level=error msg="ContainerStatus for \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\": not found" Jul 7 00:19:57.862314 kubelet[3297]: E0707 00:19:57.862287 3297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\": not found" containerID="76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e" Jul 7 00:19:57.864227 kubelet[3297]: I0707 00:19:57.864112 3297 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e"} err="failed to get container status \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"76852031269f0337640651d006dd506883642fb50642bfb64974c7909f6c8d0e\": not found" Jul 7 00:19:57.864227 kubelet[3297]: I0707 00:19:57.864228 3297 scope.go:117] "RemoveContainer" containerID="6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117" Jul 7 00:19:57.864587 containerd[1994]: time="2025-07-07T00:19:57.864527751Z" level=error msg="ContainerStatus for \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\": not found" Jul 7 00:19:57.864761 kubelet[3297]: E0707 00:19:57.864678 3297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\": not found" containerID="6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117" Jul 7 00:19:57.864761 kubelet[3297]: I0707 00:19:57.864701 3297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117"} err="failed to get container status \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ef46c7b3ec76b58ac44a492d004ac93ad80225a6a9f891a1d8dd37a44e7b117\": not found" Jul 7 00:19:57.864761 kubelet[3297]: I0707 00:19:57.864719 3297 scope.go:117] "RemoveContainer" containerID="b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab" Jul 7 00:19:57.864962 containerd[1994]: time="2025-07-07T00:19:57.864930846Z" level=error msg="ContainerStatus for \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\": not found" Jul 7 00:19:57.865166 kubelet[3297]: E0707 00:19:57.865045 3297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\": not found" containerID="b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab" Jul 7 00:19:57.865166 kubelet[3297]: I0707 00:19:57.865093 3297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab"} err="failed to get container status \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\": rpc error: code = NotFound desc = an error occurred when try to find container \"b640b168589cd2f83051998a1d7a1e7225478235ebfdaf11242579bdd43c4fab\": not found" Jul 7 00:19:57.865166 kubelet[3297]: I0707 00:19:57.865107 3297 scope.go:117] "RemoveContainer" containerID="e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8" Jul 7 00:19:57.865300 containerd[1994]: time="2025-07-07T00:19:57.865263219Z" level=error msg="ContainerStatus for \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\": not found" Jul 7 00:19:57.865436 kubelet[3297]: E0707 00:19:57.865403 3297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\": not found" containerID="e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8" Jul 7 00:19:57.865436 kubelet[3297]: I0707 00:19:57.865426 3297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8"} err="failed to get container status \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e044b2bc92757a462c48dd9f2cfcc66f4507a6e6a129394fed984d72a2a7f2e8\": not found" Jul 7 00:19:57.865502 kubelet[3297]: I0707 00:19:57.865441 3297 scope.go:117] "RemoveContainer" containerID="669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24" Jul 7 00:19:57.865646 containerd[1994]: time="2025-07-07T00:19:57.865619294Z" level=error msg="ContainerStatus for \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\": not found" Jul 7 00:19:57.865751 kubelet[3297]: E0707 00:19:57.865724 3297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\": not found" containerID="669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24" Jul 7 00:19:57.865800 kubelet[3297]: I0707 00:19:57.865762 3297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24"} err="failed to get container status \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\": rpc error: code = NotFound desc = an error occurred when try to find container \"669262df1f541d01ca0a37e2b10b07086c8bd813a91e08fb7a0eacff552e8e24\": not found" Jul 7 00:19:57.865800 kubelet[3297]: I0707 00:19:57.865776 3297 scope.go:117] "RemoveContainer" containerID="9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47" Jul 7 00:19:57.867188 containerd[1994]: time="2025-07-07T00:19:57.867159339Z" level=info msg="RemoveContainer for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\"" Jul 7 00:19:57.886074 containerd[1994]: time="2025-07-07T00:19:57.885961942Z" level=info msg="RemoveContainer for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" returns successfully" Jul 7 00:19:57.887244 kubelet[3297]: I0707 00:19:57.887211 3297 scope.go:117] "RemoveContainer" containerID="9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47" Jul 7 00:19:57.887936 containerd[1994]: time="2025-07-07T00:19:57.887891107Z" level=error msg="ContainerStatus for \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\": not found" Jul 7 00:19:57.888295 kubelet[3297]: E0707 
00:19:57.888268 3297 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\": not found" containerID="9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47" Jul 7 00:19:57.888401 kubelet[3297]: I0707 00:19:57.888300 3297 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47"} err="failed to get container status \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a9c3a27b63c6fa07e80643a06b9a9fd23b9473818c8f8f320273e231b8fcd47\": not found" Jul 7 00:19:58.168616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11-shm.mount: Deactivated successfully. Jul 7 00:19:58.168827 systemd[1]: var-lib-kubelet-pods-a29ba126\x2d2dd2\x2d4518\x2d86d9\x2dd7cf6f445808-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh79bw.mount: Deactivated successfully. Jul 7 00:19:58.168904 systemd[1]: var-lib-kubelet-pods-4da188ec\x2d67ac\x2d46c3\x2db7e5\x2ddb5d8349946a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57qbx.mount: Deactivated successfully. Jul 7 00:19:58.169026 systemd[1]: var-lib-kubelet-pods-4da188ec\x2d67ac\x2d46c3\x2db7e5\x2ddb5d8349946a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 00:19:58.169095 systemd[1]: var-lib-kubelet-pods-4da188ec\x2d67ac\x2d46c3\x2db7e5\x2ddb5d8349946a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 00:19:58.327949 kubelet[3297]: I0707 00:19:58.327879 3297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" path="/var/lib/kubelet/pods/4da188ec-67ac-46c3-b7e5-db5d8349946a/volumes" Jul 7 00:19:58.328497 kubelet[3297]: I0707 00:19:58.328477 3297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29ba126-2dd2-4518-86d9-d7cf6f445808" path="/var/lib/kubelet/pods/a29ba126-2dd2-4518-86d9-d7cf6f445808/volumes" Jul 7 00:19:58.486707 kubelet[3297]: E0707 00:19:58.486622 3297 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:19:59.045220 sshd[5087]: Connection closed by 147.75.109.163 port 60364 Jul 7 00:19:59.046259 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Jul 7 00:19:59.050737 systemd[1]: sshd@24-172.31.31.140:22-147.75.109.163:60364.service: Deactivated successfully. Jul 7 00:19:59.052923 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:19:59.054908 systemd-logind[1973]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:19:59.057305 systemd-logind[1973]: Removed session 25. Jul 7 00:19:59.080438 systemd[1]: Started sshd@25-172.31.31.140:22-147.75.109.163:43072.service - OpenSSH per-connection server daemon (147.75.109.163:43072). 
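The ContainerStatus failures above are benign: the kubelet queries the runtime for container IDs it has just removed, the runtime answers with gRPC code NotFound, and the deletor treats that as "already gone". A minimal Go sketch of that check, assuming only the standard grpc status/codes packages; the helper name is illustrative, not kubelet's actual code:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a CRI RPC error carries gRPC code NotFound,
    // i.e. the runtime no longer knows the queried container ID.
    func alreadyGone(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	// Shaped like the "rpc error: code = NotFound ..." messages above (ID shortened).
    	err := status.Error(codes.NotFound, `an error occurred when try to find container "76852031...": not found`)
    	if alreadyGone(err) {
    		fmt.Println("container already removed; treat the delete as a no-op")
    	}
    }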
Jul 7 00:19:59.268732 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 43072 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:19:59.270219 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:19:59.276577 systemd-logind[1973]: New session 26 of user core. Jul 7 00:19:59.285599 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:19:59.576469 ntpd[1966]: Deleting interface #11 lxc_health, fe80::47d:4aff:fe72:2538%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jul 7 00:19:59.576922 ntpd[1966]: 7 Jul 00:19:59 ntpd[1966]: Deleting interface #11 lxc_health, fe80::47d:4aff:fe72:2538%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jul 7 00:20:00.462362 kubelet[3297]: I0707 00:20:00.462296 3297 setters.go:600] "Node became not ready" node="ip-172-31-31-140" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T00:20:00Z","lastTransitionTime":"2025-07-07T00:20:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 00:20:00.566614 sshd[5245]: Connection closed by 147.75.109.163 port 43072 Jul 7 00:20:00.567426 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:00.575221 systemd[1]: sshd@25-172.31.31.140:22-147.75.109.163:43072.service: Deactivated successfully. Jul 7 00:20:00.575275 systemd-logind[1973]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:20:00.582224 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:20:00.586587 kubelet[3297]: E0707 00:20:00.586553 3297 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" containerName="clean-cilium-state" Jul 7 00:20:00.586587 kubelet[3297]: E0707 00:20:00.586587 3297 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" containerName="cilium-agent" Jul 7 00:20:00.586755 kubelet[3297]: E0707 00:20:00.586598 3297 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" containerName="mount-cgroup" Jul 7 00:20:00.586755 kubelet[3297]: E0707 00:20:00.586606 3297 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" containerName="apply-sysctl-overwrites" Jul 7 00:20:00.586755 kubelet[3297]: E0707 00:20:00.586614 3297 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" containerName="mount-bpf-fs" Jul 7 00:20:00.586755 kubelet[3297]: E0707 00:20:00.586622 3297 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a29ba126-2dd2-4518-86d9-d7cf6f445808" containerName="cilium-operator" Jul 7 00:20:00.586755 kubelet[3297]: I0707 00:20:00.586664 3297 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da188ec-67ac-46c3-b7e5-db5d8349946a" containerName="cilium-agent" Jul 7 00:20:00.586755 kubelet[3297]: I0707 00:20:00.586674 3297 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29ba126-2dd2-4518-86d9-d7cf6f445808" containerName="cilium-operator" Jul 7 00:20:00.588595 systemd-logind[1973]: Removed session 26. Jul 7 00:20:00.605814 systemd[1]: Started sshd@26-172.31.31.140:22-147.75.109.163:43086.service - OpenSSH per-connection server daemon (147.75.109.163:43086). 
Jul 7 00:20:00.624413 systemd[1]: Created slice kubepods-burstable-pod43a72664_8909_44af_9f3e_d9f0025e6024.slice - libcontainer container kubepods-burstable-pod43a72664_8909_44af_9f3e_d9f0025e6024.slice. Jul 7 00:20:00.647136 kubelet[3297]: I0707 00:20:00.647048 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-lib-modules\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647273 kubelet[3297]: I0707 00:20:00.647173 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-etc-cni-netd\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647273 kubelet[3297]: I0707 00:20:00.647197 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddx49\" (UniqueName: \"kubernetes.io/projected/43a72664-8909-44af-9f3e-d9f0025e6024-kube-api-access-ddx49\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647273 kubelet[3297]: I0707 00:20:00.647249 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-cilium-cgroup\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647457 kubelet[3297]: I0707 00:20:00.647272 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-bpf-maps\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647457 kubelet[3297]: I0707 00:20:00.647328 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43a72664-8909-44af-9f3e-d9f0025e6024-clustermesh-secrets\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647457 kubelet[3297]: I0707 00:20:00.647384 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-cilium-run\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647457 kubelet[3297]: I0707 00:20:00.647410 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-cni-path\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647635 kubelet[3297]: I0707 00:20:00.647466 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-xtables-lock\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 
00:20:00.647635 kubelet[3297]: I0707 00:20:00.647491 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/43a72664-8909-44af-9f3e-d9f0025e6024-cilium-ipsec-secrets\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647635 kubelet[3297]: I0707 00:20:00.647555 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43a72664-8909-44af-9f3e-d9f0025e6024-hubble-tls\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647635 kubelet[3297]: I0707 00:20:00.647610 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-host-proc-sys-net\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647786 kubelet[3297]: I0707 00:20:00.647635 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-hostproc\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647786 kubelet[3297]: I0707 00:20:00.647698 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43a72664-8909-44af-9f3e-d9f0025e6024-cilium-config-path\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.647786 kubelet[3297]: I0707 00:20:00.647722 3297 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43a72664-8909-44af-9f3e-d9f0025e6024-host-proc-sys-kernel\") pod \"cilium-7k8ds\" (UID: \"43a72664-8909-44af-9f3e-d9f0025e6024\") " pod="kube-system/cilium-7k8ds" Jul 7 00:20:00.831451 sshd[5257]: Accepted publickey for core from 147.75.109.163 port 43086 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:20:00.834514 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:00.846193 systemd-logind[1973]: New session 27 of user core. Jul 7 00:20:00.848562 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 00:20:00.937830 containerd[1994]: time="2025-07-07T00:20:00.937780299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7k8ds,Uid:43a72664-8909-44af-9f3e-d9f0025e6024,Namespace:kube-system,Attempt:0,}" Jul 7 00:20:00.971681 sshd[5263]: Connection closed by 147.75.109.163 port 43086 Jul 7 00:20:00.971036 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:00.977735 systemd[1]: sshd@26-172.31.31.140:22-147.75.109.163:43086.service: Deactivated successfully. Jul 7 00:20:00.985268 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 00:20:00.987678 systemd-logind[1973]: Session 27 logged out. Waiting for processes to exit. Jul 7 00:20:00.990325 systemd-logind[1973]: Removed session 27. 
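The reconciler_common records above enumerate the volumes attached to the new cilium-7k8ds pod: host-path mounts (lib-modules, bpf-maps, cni-path, hostproc, ...), a clustermesh-secrets secret, projected kube-api-access and hubble-tls volumes, and a cilium-config-path configmap. A hedged sketch of how two of those volumes would be declared with the Kubernetes API types; the hostPath location and secret name are assumptions, since the log records only volume names and plugin types:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Illustrative only: two of the volumes named in the log, expressed as API objects.
    	vols := []corev1.Volume{
    		{
    			Name: "lib-modules",
    			VolumeSource: corev1.VolumeSource{
    				// Assumed path; the log does not record the host location.
    				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
    			},
    		},
    		{
    			Name: "clustermesh-secrets",
    			VolumeSource: corev1.VolumeSource{
    				// Assumed secret name; only the volume name appears in the log.
    				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"},
    			},
    		},
    	}
    	for _, v := range vols {
    		fmt.Printf("volume %q: %+v\n", v.Name, v.VolumeSource)
    	}
    }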
Jul 7 00:20:00.990769 containerd[1994]: time="2025-07-07T00:20:00.990728875Z" level=info msg="connecting to shim 015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499" address="unix:///run/containerd/s/4f3b5738d88de292e3c6c20a1a83dabf2e47145438c1dbbf33ff0bd20cc7c53d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 00:20:01.007751 systemd[1]: Started sshd@27-172.31.31.140:22-147.75.109.163:43102.service - OpenSSH per-connection server daemon (147.75.109.163:43102). Jul 7 00:20:01.035006 systemd[1]: Started cri-containerd-015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499.scope - libcontainer container 015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499. Jul 7 00:20:01.080192 containerd[1994]: time="2025-07-07T00:20:01.080135744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7k8ds,Uid:43a72664-8909-44af-9f3e-d9f0025e6024,Namespace:kube-system,Attempt:0,} returns sandbox id \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\"" Jul 7 00:20:01.085049 containerd[1994]: time="2025-07-07T00:20:01.084105619Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 00:20:01.099376 containerd[1994]: time="2025-07-07T00:20:01.099242697Z" level=info msg="Container b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:01.135051 containerd[1994]: time="2025-07-07T00:20:01.134983288Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\"" Jul 7 00:20:01.140653 containerd[1994]: time="2025-07-07T00:20:01.140610952Z" level=info msg="StartContainer for \"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\"" Jul 7 00:20:01.145041 containerd[1994]: time="2025-07-07T00:20:01.145001990Z" level=info msg="connecting to shim b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2" address="unix:///run/containerd/s/4f3b5738d88de292e3c6c20a1a83dabf2e47145438c1dbbf33ff0bd20cc7c53d" protocol=ttrpc version=3 Jul 7 00:20:01.172670 systemd[1]: Started cri-containerd-b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2.scope - libcontainer container b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2. Jul 7 00:20:01.217488 sshd[5291]: Accepted publickey for core from 147.75.109.163 port 43102 ssh2: RSA SHA256:E/SRBqimxlLE3eX7n/Q1UlDR6MFr+oR3VAz7Mg10aAM Jul 7 00:20:01.221150 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:20:01.241765 systemd-logind[1973]: New session 28 of user core. Jul 7 00:20:01.250514 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 00:20:01.268179 containerd[1994]: time="2025-07-07T00:20:01.268146610Z" level=info msg="StartContainer for \"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\" returns successfully" Jul 7 00:20:01.312885 systemd[1]: cri-containerd-b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2.scope: Deactivated successfully. Jul 7 00:20:01.313573 systemd[1]: cri-containerd-b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2.scope: Consumed 30ms CPU time, 9.5M memory peak, 2.9M read from disk. 
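The containerd records above (RunPodSandbox for cilium-7k8ds, CreateContainer for mount-cgroup, StartContainer, then the TaskExit and scope teardown) are the kubelet driving containerd through the CRI runtime API over the shim socket shown. A minimal sketch of the same three calls against containerd's CRI endpoint, assuming the k8s.io/cri-api definitions; the socket path, image reference, and error handling are illustrative, not taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumption: containerd's CRI endpoint at its default socket path.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()

    	// RunPodSandbox, as in the "RunPodSandbox for &PodSandboxMetadata{Name:cilium-7k8ds,...}" record.
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "cilium-7k8ds",
    				Namespace: "kube-system",
    				Uid:       "43a72664-8909-44af-9f3e-d9f0025e6024",
    			},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}

    	// CreateContainer inside that sandbox, then StartContainer, mirroring the mount-cgroup
    	// records; the image reference is a placeholder, not taken from the log.
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
    			Image:    &runtimeapi.ImageSpec{Image: "example.invalid/cilium:placeholder"},
    		},
    		SandboxConfig: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{Name: "cilium-7k8ds", Namespace: "kube-system"},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox:", sb.PodSandboxId, "container:", created.ContainerId)
    }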
Jul 7 00:20:01.318209 containerd[1994]: time="2025-07-07T00:20:01.317312171Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\" id:\"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\" pid:5331 exited_at:{seconds:1751847601 nanos:316822115}" Jul 7 00:20:01.318613 containerd[1994]: time="2025-07-07T00:20:01.317924076Z" level=info msg="received exit event container_id:\"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\" id:\"b6cb8f68775323746357a4c30704e11e3af4c360fd94e5385c21d59ac97cb6c2\" pid:5331 exited_at:{seconds:1751847601 nanos:316822115}" Jul 7 00:20:01.857163 containerd[1994]: time="2025-07-07T00:20:01.856317629Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 00:20:01.899835 containerd[1994]: time="2025-07-07T00:20:01.899786825Z" level=info msg="Container 2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:01.941276 containerd[1994]: time="2025-07-07T00:20:01.941215605Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\"" Jul 7 00:20:01.945157 containerd[1994]: time="2025-07-07T00:20:01.945119662Z" level=info msg="StartContainer for \"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\"" Jul 7 00:20:01.948008 containerd[1994]: time="2025-07-07T00:20:01.947849367Z" level=info msg="connecting to shim 2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd" address="unix:///run/containerd/s/4f3b5738d88de292e3c6c20a1a83dabf2e47145438c1dbbf33ff0bd20cc7c53d" protocol=ttrpc version=3 Jul 7 00:20:02.000032 systemd[1]: Started cri-containerd-2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd.scope - libcontainer container 2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd. Jul 7 00:20:02.168910 containerd[1994]: time="2025-07-07T00:20:02.168005755Z" level=info msg="StartContainer for \"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\" returns successfully" Jul 7 00:20:02.207429 systemd[1]: cri-containerd-2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd.scope: Deactivated successfully. Jul 7 00:20:02.213081 systemd[1]: cri-containerd-2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd.scope: Consumed 31ms CPU time, 7.1M memory peak, 1.8M read from disk. 
Jul 7 00:20:02.217602 containerd[1994]: time="2025-07-07T00:20:02.215578966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\" id:\"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\" pid:5381 exited_at:{seconds:1751847602 nanos:213381872}" Jul 7 00:20:02.217602 containerd[1994]: time="2025-07-07T00:20:02.216975312Z" level=info msg="received exit event container_id:\"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\" id:\"2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd\" pid:5381 exited_at:{seconds:1751847602 nanos:213381872}" Jul 7 00:20:02.359831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dede69d3a3e3d0bcb0c65afd86e7742d7c9e7b82584d5a787ff7e5f56a67acd-rootfs.mount: Deactivated successfully. Jul 7 00:20:02.849244 containerd[1994]: time="2025-07-07T00:20:02.847388752Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 00:20:02.864815 containerd[1994]: time="2025-07-07T00:20:02.864768635Z" level=info msg="Container 46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:02.887751 containerd[1994]: time="2025-07-07T00:20:02.887178398Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\"" Jul 7 00:20:02.889722 containerd[1994]: time="2025-07-07T00:20:02.889683452Z" level=info msg="StartContainer for \"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\"" Jul 7 00:20:02.893987 containerd[1994]: time="2025-07-07T00:20:02.893940330Z" level=info msg="connecting to shim 46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa" address="unix:///run/containerd/s/4f3b5738d88de292e3c6c20a1a83dabf2e47145438c1dbbf33ff0bd20cc7c53d" protocol=ttrpc version=3 Jul 7 00:20:02.924692 systemd[1]: Started cri-containerd-46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa.scope - libcontainer container 46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa. Jul 7 00:20:03.061122 containerd[1994]: time="2025-07-07T00:20:03.061078505Z" level=info msg="StartContainer for \"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\" returns successfully" Jul 7 00:20:03.078921 systemd[1]: cri-containerd-46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa.scope: Deactivated successfully. 
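The exited_at fields in the TaskExit records are plain Unix {seconds, nanos} pairs; converting the pair from the 2dede69d... exit above reproduces the wall-clock timestamp of the surrounding log line:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at:{seconds:1751847602 nanos:213381872} from the TaskExit record above.
    	exitedAt := time.Unix(1751847602, 213381872).UTC()
    	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-07-07T00:20:02.213381872Z
    }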
Jul 7 00:20:03.084238 containerd[1994]: time="2025-07-07T00:20:03.084186877Z" level=info msg="received exit event container_id:\"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\" id:\"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\" pid:5424 exited_at:{seconds:1751847603 nanos:83930611}" Jul 7 00:20:03.086442 containerd[1994]: time="2025-07-07T00:20:03.085074093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\" id:\"46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa\" pid:5424 exited_at:{seconds:1751847603 nanos:83930611}" Jul 7 00:20:03.149608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46327f264b81e5dba8d64f11d4f3da6cd932a987024ee8f26c5d89b5c85ca5fa-rootfs.mount: Deactivated successfully. Jul 7 00:20:03.488034 kubelet[3297]: E0707 00:20:03.487979 3297 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 00:20:03.853771 containerd[1994]: time="2025-07-07T00:20:03.853544579Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 00:20:03.892899 containerd[1994]: time="2025-07-07T00:20:03.892835497Z" level=info msg="Container 83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:03.900625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876101231.mount: Deactivated successfully. Jul 7 00:20:03.907969 containerd[1994]: time="2025-07-07T00:20:03.907924292Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\"" Jul 7 00:20:03.909443 containerd[1994]: time="2025-07-07T00:20:03.909402432Z" level=info msg="StartContainer for \"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\"" Jul 7 00:20:03.910751 containerd[1994]: time="2025-07-07T00:20:03.910707281Z" level=info msg="connecting to shim 83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5" address="unix:///run/containerd/s/4f3b5738d88de292e3c6c20a1a83dabf2e47145438c1dbbf33ff0bd20cc7c53d" protocol=ttrpc version=3 Jul 7 00:20:03.945603 systemd[1]: Started cri-containerd-83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5.scope - libcontainer container 83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5. Jul 7 00:20:03.979140 systemd[1]: cri-containerd-83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5.scope: Deactivated successfully. 
Jul 7 00:20:03.980975 containerd[1994]: time="2025-07-07T00:20:03.980932691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\" id:\"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\" pid:5466 exited_at:{seconds:1751847603 nanos:980629745}" Jul 7 00:20:03.983216 containerd[1994]: time="2025-07-07T00:20:03.983039026Z" level=info msg="received exit event container_id:\"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\" id:\"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\" pid:5466 exited_at:{seconds:1751847603 nanos:980629745}" Jul 7 00:20:03.995821 containerd[1994]: time="2025-07-07T00:20:03.995778542Z" level=info msg="StartContainer for \"83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5\" returns successfully" Jul 7 00:20:04.012230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83b5f15e413f0fbdf1c4b280674e84a42dfe21e502fe362e4a7436ce77f0b3f5-rootfs.mount: Deactivated successfully. Jul 7 00:20:04.859512 containerd[1994]: time="2025-07-07T00:20:04.859462923Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 00:20:04.877893 containerd[1994]: time="2025-07-07T00:20:04.877453792Z" level=info msg="Container 0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:04.891823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292803179.mount: Deactivated successfully. Jul 7 00:20:04.897879 containerd[1994]: time="2025-07-07T00:20:04.897796973Z" level=info msg="CreateContainer within sandbox \"015ef8e04bd79f4024caff2f6c82da3be4c63f95800c64789447c89e47d01499\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\"" Jul 7 00:20:04.899423 containerd[1994]: time="2025-07-07T00:20:04.899391653Z" level=info msg="StartContainer for \"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\"" Jul 7 00:20:04.901013 containerd[1994]: time="2025-07-07T00:20:04.900965942Z" level=info msg="connecting to shim 0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61" address="unix:///run/containerd/s/4f3b5738d88de292e3c6c20a1a83dabf2e47145438c1dbbf33ff0bd20cc7c53d" protocol=ttrpc version=3 Jul 7 00:20:04.935740 systemd[1]: Started cri-containerd-0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61.scope - libcontainer container 0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61. 
Jul 7 00:20:04.979717 containerd[1994]: time="2025-07-07T00:20:04.979678117Z" level=info msg="StartContainer for \"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\" returns successfully" Jul 7 00:20:05.120150 containerd[1994]: time="2025-07-07T00:20:05.119852519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\" id:\"8a75272dec82ef2e3767b6e980dff6c4cd850319081baad11ba695b2ec1f75ca\" pid:5535 exited_at:{seconds:1751847605 nanos:119117153}" Jul 7 00:20:05.867461 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 00:20:05.928063 kubelet[3297]: I0707 00:20:05.927291 3297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7k8ds" podStartSLOduration=5.927264747 podStartE2EDuration="5.927264747s" podCreationTimestamp="2025-07-07 00:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:20:05.925619394 +0000 UTC m=+107.817062498" watchObservedRunningTime="2025-07-07 00:20:05.927264747 +0000 UTC m=+107.818707846" Jul 7 00:20:08.179882 containerd[1994]: time="2025-07-07T00:20:08.179831711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\" id:\"640665b1a2ca011d74cebde1a77ae46dfbd196d7762073d99498e4d2b92db8b7\" pid:5710 exit_status:1 exited_at:{seconds:1751847608 nanos:179338001}" Jul 7 00:20:09.186048 (udev-worker)[6018]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:20:09.186417 (udev-worker)[6019]: Network interface NamePolicy= disabled on kernel command line. Jul 7 00:20:09.192478 systemd-networkd[1825]: lxc_health: Link UP Jul 7 00:20:09.208824 systemd-networkd[1825]: lxc_health: Gained carrier Jul 7 00:20:10.520209 containerd[1994]: time="2025-07-07T00:20:10.520143949Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\" id:\"b8fd8eb6e34048e1351686d53f4f0a3b95d43c0f7f5d32745f5f3de239b3a9b7\" pid:6070 exited_at:{seconds:1751847610 nanos:519482941}" Jul 7 00:20:10.636703 systemd-networkd[1825]: lxc_health: Gained IPv6LL Jul 7 00:20:12.728773 containerd[1994]: time="2025-07-07T00:20:12.728649056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\" id:\"7585675f1493e344b76fa4131d2b9286c4bf81d672b2a049d0c3022a244402f9\" pid:6103 exited_at:{seconds:1751847612 nanos:727826151}" Jul 7 00:20:13.576570 ntpd[1966]: Listen normally on 14 lxc_health [fe80::58be:89ff:fe10:f7b9%14]:123 Jul 7 00:20:13.577059 ntpd[1966]: 7 Jul 00:20:13 ntpd[1966]: Listen normally on 14 lxc_health [fe80::58be:89ff:fe10:f7b9%14]:123 Jul 7 00:20:14.857021 containerd[1994]: time="2025-07-07T00:20:14.856856347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e6aa54e1122b39620167b78be1efd833a2323d479056de9eae7da242c9c4a61\" id:\"d5ca5e37c43a523e62b0be051fdc916995e8ed2032471f975c208a7bc2301385\" pid:6135 exited_at:{seconds:1751847614 nanos:855881066}" Jul 7 00:20:14.886175 sshd[5344]: Connection closed by 147.75.109.163 port 43102 Jul 7 00:20:14.887359 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Jul 7 00:20:14.900205 systemd[1]: sshd@27-172.31.31.140:22-147.75.109.163:43102.service: Deactivated successfully. 
Jul 7 00:20:14.900509 systemd-logind[1973]: Session 28 logged out. Waiting for processes to exit. Jul 7 00:20:14.903074 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 00:20:14.905871 systemd-logind[1973]: Removed session 28. Jul 7 00:20:18.366521 containerd[1994]: time="2025-07-07T00:20:18.366482218Z" level=info msg="StopPodSandbox for \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\"" Jul 7 00:20:18.367119 containerd[1994]: time="2025-07-07T00:20:18.366608989Z" level=info msg="TearDown network for sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" successfully" Jul 7 00:20:18.367119 containerd[1994]: time="2025-07-07T00:20:18.366620177Z" level=info msg="StopPodSandbox for \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" returns successfully" Jul 7 00:20:18.367119 containerd[1994]: time="2025-07-07T00:20:18.367035287Z" level=info msg="RemovePodSandbox for \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\"" Jul 7 00:20:18.374053 containerd[1994]: time="2025-07-07T00:20:18.374004581Z" level=info msg="Forcibly stopping sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\"" Jul 7 00:20:18.374222 containerd[1994]: time="2025-07-07T00:20:18.374172567Z" level=info msg="TearDown network for sandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" successfully" Jul 7 00:20:18.377776 containerd[1994]: time="2025-07-07T00:20:18.377738769Z" level=info msg="Ensure that sandbox cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11 in task-service has been cleanup successfully" Jul 7 00:20:18.383795 containerd[1994]: time="2025-07-07T00:20:18.383722690Z" level=info msg="RemovePodSandbox \"cbb9f6ba5860ef94d88bc7d495577ddedc551e19b8e51d1e5627ffb192cfce11\" returns successfully" Jul 7 00:20:18.384252 containerd[1994]: time="2025-07-07T00:20:18.384228888Z" level=info msg="StopPodSandbox for \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\"" Jul 7 00:20:18.384428 containerd[1994]: time="2025-07-07T00:20:18.384365748Z" level=info msg="TearDown network for sandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" successfully" Jul 7 00:20:18.384428 containerd[1994]: time="2025-07-07T00:20:18.384393464Z" level=info msg="StopPodSandbox for \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" returns successfully" Jul 7 00:20:18.384674 containerd[1994]: time="2025-07-07T00:20:18.384651168Z" level=info msg="RemovePodSandbox for \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\"" Jul 7 00:20:18.384808 containerd[1994]: time="2025-07-07T00:20:18.384680847Z" level=info msg="Forcibly stopping sandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\"" Jul 7 00:20:18.384808 containerd[1994]: time="2025-07-07T00:20:18.384764964Z" level=info msg="TearDown network for sandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" successfully" Jul 7 00:20:18.387054 containerd[1994]: time="2025-07-07T00:20:18.387001558Z" level=info msg="Ensure that sandbox 90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015 in task-service has been cleanup successfully" Jul 7 00:20:18.393210 containerd[1994]: time="2025-07-07T00:20:18.393167177Z" level=info msg="RemovePodSandbox \"90a9dc74bab96b31e554185441b1f013452257828b62691a1e7829aace710015\" returns successfully" Jul 7 00:20:29.577126 systemd[1]: 
cri-containerd-6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f.scope: Deactivated successfully. Jul 7 00:20:29.577543 systemd[1]: cri-containerd-6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f.scope: Consumed 2.970s CPU time, 70.9M memory peak, 22.2M read from disk. Jul 7 00:20:29.582081 containerd[1994]: time="2025-07-07T00:20:29.582041066Z" level=info msg="received exit event container_id:\"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\" id:\"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\" pid:3117 exit_status:1 exited_at:{seconds:1751847629 nanos:581549999}" Jul 7 00:20:29.583431 containerd[1994]: time="2025-07-07T00:20:29.583382672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\" id:\"6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f\" pid:3117 exit_status:1 exited_at:{seconds:1751847629 nanos:581549999}" Jul 7 00:20:29.609765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f-rootfs.mount: Deactivated successfully. Jul 7 00:20:29.951677 kubelet[3297]: I0707 00:20:29.951032 3297 scope.go:117] "RemoveContainer" containerID="6e74632f8b3cff993f22d8f7237bfdb9deeababb02257f9299b19ca299ec6b0f" Jul 7 00:20:29.954118 containerd[1994]: time="2025-07-07T00:20:29.954054077Z" level=info msg="CreateContainer within sandbox \"27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 00:20:29.972008 containerd[1994]: time="2025-07-07T00:20:29.971664416Z" level=info msg="Container 52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:29.974887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158026126.mount: Deactivated successfully. Jul 7 00:20:29.986163 containerd[1994]: time="2025-07-07T00:20:29.986109910Z" level=info msg="CreateContainer within sandbox \"27aff665c929a6902a5766bd5a381cb51b35876c7c8f32cbf36cc1b362b4f97b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af\"" Jul 7 00:20:29.988207 containerd[1994]: time="2025-07-07T00:20:29.986810442Z" level=info msg="StartContainer for \"52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af\"" Jul 7 00:20:29.988207 containerd[1994]: time="2025-07-07T00:20:29.988112254Z" level=info msg="connecting to shim 52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af" address="unix:///run/containerd/s/8e32b425b7ff0a427dc87ce48fb07f77973407e574281cf1d881effd880c2b9e" protocol=ttrpc version=3 Jul 7 00:20:30.023856 systemd[1]: Started cri-containerd-52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af.scope - libcontainer container 52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af. 
Jul 7 00:20:30.105669 containerd[1994]: time="2025-07-07T00:20:30.105605402Z" level=info msg="StartContainer for \"52192530fd92aef77d8462cd0f6bcd740231c727012425d7ce7569fc97f487af\" returns successfully" Jul 7 00:20:31.152576 kubelet[3297]: E0707 00:20:31.151688 3297 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 7 00:20:34.156558 systemd[1]: cri-containerd-df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c.scope: Deactivated successfully. Jul 7 00:20:34.157610 systemd[1]: cri-containerd-df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c.scope: Consumed 1.759s CPU time, 31.4M memory peak, 12.5M read from disk. Jul 7 00:20:34.160146 containerd[1994]: time="2025-07-07T00:20:34.160098004Z" level=info msg="received exit event container_id:\"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\" id:\"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\" pid:3147 exit_status:1 exited_at:{seconds:1751847634 nanos:159849902}" Jul 7 00:20:34.160805 containerd[1994]: time="2025-07-07T00:20:34.160438104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\" id:\"df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c\" pid:3147 exit_status:1 exited_at:{seconds:1751847634 nanos:159849902}" Jul 7 00:20:34.187452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c-rootfs.mount: Deactivated successfully. Jul 7 00:20:34.967837 kubelet[3297]: I0707 00:20:34.967800 3297 scope.go:117] "RemoveContainer" containerID="df632b625a5d9a2b70e45fafa641486bd92d98c3a802a9e2f921e349374cd26c" Jul 7 00:20:34.970011 containerd[1994]: time="2025-07-07T00:20:34.969957560Z" level=info msg="CreateContainer within sandbox \"3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 7 00:20:34.992369 containerd[1994]: time="2025-07-07T00:20:34.990369367Z" level=info msg="Container 87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41: CDI devices from CRI Config.CDIDevices: []" Jul 7 00:20:35.009134 containerd[1994]: time="2025-07-07T00:20:35.009083176Z" level=info msg="CreateContainer within sandbox \"3bb5e78755ef39b5634830bc32c7aeaaeed0a1b1ca575982bb604fa61004a171\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41\"" Jul 7 00:20:35.009821 containerd[1994]: time="2025-07-07T00:20:35.009783897Z" level=info msg="StartContainer for \"87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41\"" Jul 7 00:20:35.011084 containerd[1994]: time="2025-07-07T00:20:35.011040639Z" level=info msg="connecting to shim 87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41" address="unix:///run/containerd/s/7bd5b2e2f21916edb5a600a15c7681485e0f46fd013681b382255f82c4450ca3" protocol=ttrpc version=3 Jul 7 00:20:35.040621 systemd[1]: Started cri-containerd-87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41.scope - libcontainer container 87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41. 
Jul 7 00:20:35.102070 containerd[1994]: time="2025-07-07T00:20:35.102034942Z" level=info msg="StartContainer for \"87db5812625d4938f7543b8e63ca70d612d0acd456edac06ac8a8ed23e8a1c41\" returns successfully" Jul 7 00:20:41.154233 kubelet[3297]: E0707 00:20:41.154174 3297 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-140?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
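The closing records show the kubelet timing out while PUTting its node Lease to the API server at 172.31.31.140:6443 (the kube-node-lease/ip-172-31-31-140 object), which is the node heartbeat that keeps the node marked healthy. A rough client-go sketch of the same renew operation, assuming in-cluster credentials; the kubelet's own lease controller performs this internally, so this is illustrative only:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Assumption: running in-cluster; outside the cluster, a kubeconfig-based config would be used instead.
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// The failing PUT in the last record targets this object: the node's Lease in kube-node-lease.
    	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "ip-172-31-31-140", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	now := metav1.NewMicroTime(time.Now())
    	lease.Spec.RenewTime = &now
    	if _, err := cs.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("lease renewed at", now.Time)
    }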