Jul 7 06:15:33.903101 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025 Jul 7 06:15:33.903145 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:15:33.903158 kernel: BIOS-provided physical RAM map: Jul 7 06:15:33.903167 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 06:15:33.903177 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Jul 7 06:15:33.903186 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 7 06:15:33.903199 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 7 06:15:33.903210 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 7 06:15:33.903224 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 7 06:15:33.903235 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 7 06:15:33.904297 kernel: NX (Execute Disable) protection: active Jul 7 06:15:33.904310 kernel: APIC: Static calls initialized Jul 7 06:15:33.904322 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Jul 7 06:15:33.904333 kernel: extended physical RAM map: Jul 7 06:15:33.904352 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 7 06:15:33.904364 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Jul 7 06:15:33.904377 kernel: reserve setup_data: [mem 
0x00000000768c0018-0x00000000768c8e57] usable Jul 7 06:15:33.904389 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Jul 7 06:15:33.904401 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Jul 7 06:15:33.904413 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Jul 7 06:15:33.904426 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Jul 7 06:15:33.904437 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Jul 7 06:15:33.904449 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Jul 7 06:15:33.904461 kernel: efi: EFI v2.7 by EDK II Jul 7 06:15:33.904476 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Jul 7 06:15:33.904488 kernel: secureboot: Secure boot disabled Jul 7 06:15:33.904499 kernel: SMBIOS 2.7 present. Jul 7 06:15:33.904511 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 7 06:15:33.904523 kernel: DMI: Memory slots populated: 1/1 Jul 7 06:15:33.904535 kernel: Hypervisor detected: KVM Jul 7 06:15:33.904546 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 06:15:33.904558 kernel: kvm-clock: using sched offset of 5110477835 cycles Jul 7 06:15:33.904572 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 06:15:33.904585 kernel: tsc: Detected 2499.996 MHz processor Jul 7 06:15:33.904597 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 06:15:33.904613 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 06:15:33.904625 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Jul 7 06:15:33.904637 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 7 06:15:33.904650 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 06:15:33.904663 kernel: Using GB pages for direct mapping Jul 7 
06:15:33.904680 kernel: ACPI: Early table checksum verification disabled Jul 7 06:15:33.904696 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Jul 7 06:15:33.904710 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Jul 7 06:15:33.904722 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 7 06:15:33.904736 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 7 06:15:33.904749 kernel: ACPI: FACS 0x00000000789D0000 000040 Jul 7 06:15:33.904762 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 7 06:15:33.904775 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 7 06:15:33.904788 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 7 06:15:33.904804 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 7 06:15:33.904817 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 7 06:15:33.904831 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 7 06:15:33.904844 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 7 06:15:33.904857 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Jul 7 06:15:33.904870 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Jul 7 06:15:33.904883 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Jul 7 06:15:33.904896 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Jul 7 06:15:33.904909 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Jul 7 06:15:33.904925 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Jul 7 06:15:33.904938 kernel: ACPI: Reserving APIC table memory at [mem 
0x78959000-0x78959075] Jul 7 06:15:33.904950 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Jul 7 06:15:33.905277 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Jul 7 06:15:33.905301 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Jul 7 06:15:33.905317 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Jul 7 06:15:33.905332 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Jul 7 06:15:33.905347 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 7 06:15:33.905363 kernel: NUMA: Initialized distance table, cnt=1 Jul 7 06:15:33.905382 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Jul 7 06:15:33.905398 kernel: Zone ranges: Jul 7 06:15:33.905413 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 06:15:33.905428 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Jul 7 06:15:33.905444 kernel: Normal empty Jul 7 06:15:33.905459 kernel: Device empty Jul 7 06:15:33.905474 kernel: Movable zone start for each node Jul 7 06:15:33.905490 kernel: Early memory node ranges Jul 7 06:15:33.905506 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 7 06:15:33.905524 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Jul 7 06:15:33.905540 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Jul 7 06:15:33.905554 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Jul 7 06:15:33.905568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 06:15:33.905582 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 7 06:15:33.905597 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 7 06:15:33.905612 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Jul 7 06:15:33.905627 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 7 06:15:33.905642 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 06:15:33.905658 kernel: IOAPIC[0]: 
apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 7 06:15:33.905671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 06:15:33.905685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 06:15:33.905699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 06:15:33.905712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 06:15:33.905726 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 06:15:33.905741 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 06:15:33.905755 kernel: TSC deadline timer available Jul 7 06:15:33.905768 kernel: CPU topo: Max. logical packages: 1 Jul 7 06:15:33.905785 kernel: CPU topo: Max. logical dies: 1 Jul 7 06:15:33.905798 kernel: CPU topo: Max. dies per package: 1 Jul 7 06:15:33.905812 kernel: CPU topo: Max. threads per core: 2 Jul 7 06:15:33.905826 kernel: CPU topo: Num. cores per package: 1 Jul 7 06:15:33.905839 kernel: CPU topo: Num. 
threads per package: 2 Jul 7 06:15:33.905853 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 7 06:15:33.905867 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 06:15:33.905881 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Jul 7 06:15:33.905895 kernel: Booting paravirtualized kernel on KVM Jul 7 06:15:33.905911 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 06:15:33.905931 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 7 06:15:33.905947 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 7 06:15:33.905962 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 7 06:15:33.905977 kernel: pcpu-alloc: [0] 0 1 Jul 7 06:15:33.905992 kernel: kvm-guest: PV spinlocks enabled Jul 7 06:15:33.906008 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 06:15:33.906027 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:15:33.906043 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:15:33.906062 kernel: random: crng init done Jul 7 06:15:33.906078 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:15:33.906094 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 06:15:33.906110 kernel: Fallback order for Node 0: 0 Jul 7 06:15:33.906125 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 509451 Jul 7 06:15:33.906141 kernel: Policy zone: DMA32 Jul 7 06:15:33.906172 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:15:33.906188 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 06:15:33.906205 kernel: Kernel/User page tables isolation: enabled Jul 7 06:15:33.906222 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 06:15:33.907279 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 06:15:33.907313 kernel: Dynamic Preempt: voluntary Jul 7 06:15:33.907328 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:15:33.907344 kernel: rcu: RCU event tracing is enabled. Jul 7 06:15:33.907357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 06:15:33.907370 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:15:33.907385 kernel: Rude variant of Tasks RCU enabled. Jul 7 06:15:33.907403 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:15:33.907417 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 06:15:33.907432 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 06:15:33.907447 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:15:33.907461 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:15:33.907476 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 06:15:33.907490 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 7 06:15:33.907505 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 7 06:15:33.907523 kernel: Console: colour dummy device 80x25 Jul 7 06:15:33.907537 kernel: printk: legacy console [tty0] enabled Jul 7 06:15:33.907552 kernel: printk: legacy console [ttyS0] enabled Jul 7 06:15:33.907567 kernel: ACPI: Core revision 20240827 Jul 7 06:15:33.907582 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 7 06:15:33.907597 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 06:15:33.907612 kernel: x2apic enabled Jul 7 06:15:33.907627 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 06:15:33.907642 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 7 06:15:33.907657 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jul 7 06:15:33.907676 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 7 06:15:33.907690 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 7 06:15:33.907705 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 06:15:33.907720 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 06:15:33.907734 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 06:15:33.907749 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jul 7 06:15:33.907765 kernel: RETBleed: Vulnerable Jul 7 06:15:33.907779 kernel: Speculative Store Bypass: Vulnerable Jul 7 06:15:33.907793 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 06:15:33.907807 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 7 06:15:33.907825 kernel: GDS: Unknown: Dependent on hypervisor status Jul 7 06:15:33.907840 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 06:15:33.907855 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 06:15:33.907868 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 06:15:33.907884 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 06:15:33.907898 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 7 06:15:33.907911 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 7 06:15:33.907926 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 7 06:15:33.907941 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 7 06:15:33.907955 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 7 06:15:33.907969 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 7 06:15:33.907987 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 06:15:33.908000 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 7 06:15:33.908015 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 7 06:15:33.908028 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 7 06:15:33.908042 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 7 06:15:33.908056 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 7 06:15:33.908069 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 7 06:15:33.908084 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jul 7 06:15:33.908098 kernel: Freeing SMP alternatives memory: 32K Jul 7 06:15:33.908112 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:15:33.908126 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 06:15:33.908143 kernel: landlock: Up and running. Jul 7 06:15:33.908157 kernel: SELinux: Initializing. Jul 7 06:15:33.908172 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 06:15:33.908187 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 06:15:33.908203 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 7 06:15:33.908217 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 7 06:15:33.908232 kernel: signal: max sigframe size: 3632 Jul 7 06:15:33.909288 kernel: rcu: Hierarchical SRCU implementation. Jul 7 06:15:33.909307 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:15:33.909323 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 7 06:15:33.909343 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 06:15:33.909358 kernel: smp: Bringing up secondary CPUs ... Jul 7 06:15:33.909373 kernel: smpboot: x86: Booting SMP configuration: Jul 7 06:15:33.909388 kernel: .... node #0, CPUs: #1 Jul 7 06:15:33.909405 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 7 06:15:33.909422 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 7 06:15:33.909436 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 06:15:33.909451 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jul 7 06:15:33.909466 kernel: Memory: 1908048K/2037804K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 125192K reserved, 0K cma-reserved) Jul 7 06:15:33.909481 kernel: devtmpfs: initialized Jul 7 06:15:33.909493 kernel: x86/mm: Memory block size: 128MB Jul 7 06:15:33.909507 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Jul 7 06:15:33.909520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:15:33.909533 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 06:15:33.909546 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:15:33.909559 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:15:33.909572 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:15:33.909588 kernel: audit: type=2000 audit(1751868930.924:1): state=initialized audit_enabled=0 res=1 Jul 7 06:15:33.909601 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:15:33.909616 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 06:15:33.909629 kernel: cpuidle: using governor menu Jul 7 06:15:33.909643 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 06:15:33.909658 kernel: dca service started, version 1.12.1 Jul 7 06:15:33.909672 kernel: PCI: Using configuration type 1 for base access Jul 7 06:15:33.909686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 06:15:33.909699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:15:33.909718 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:15:33.909732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:15:33.909747 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:15:33.909760 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:15:33.909774 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:15:33.912297 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:15:33.912318 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 7 06:15:33.912332 kernel: ACPI: Interpreter enabled Jul 7 06:15:33.912346 kernel: ACPI: PM: (supports S0 S5) Jul 7 06:15:33.912360 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 06:15:33.912378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 06:15:33.912391 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 06:15:33.912405 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 7 06:15:33.912419 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 06:15:33.912633 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 7 06:15:33.912771 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 7 06:15:33.912908 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 7 06:15:33.912933 kernel: acpiphp: Slot [3] registered Jul 7 06:15:33.912947 kernel: acpiphp: Slot [4] registered Jul 7 06:15:33.912961 kernel: acpiphp: Slot [5] registered Jul 7 06:15:33.912974 kernel: acpiphp: Slot [6] registered Jul 7 06:15:33.912987 kernel: acpiphp: Slot [7] registered Jul 7 06:15:33.913000 kernel: acpiphp: Slot [8] registered Jul 7 06:15:33.913014 kernel: acpiphp: Slot [9] registered Jul 7 06:15:33.913028 
kernel: acpiphp: Slot [10] registered Jul 7 06:15:33.913042 kernel: acpiphp: Slot [11] registered Jul 7 06:15:33.913061 kernel: acpiphp: Slot [12] registered Jul 7 06:15:33.913075 kernel: acpiphp: Slot [13] registered Jul 7 06:15:33.913088 kernel: acpiphp: Slot [14] registered Jul 7 06:15:33.913100 kernel: acpiphp: Slot [15] registered Jul 7 06:15:33.913114 kernel: acpiphp: Slot [16] registered Jul 7 06:15:33.913127 kernel: acpiphp: Slot [17] registered Jul 7 06:15:33.913141 kernel: acpiphp: Slot [18] registered Jul 7 06:15:33.913154 kernel: acpiphp: Slot [19] registered Jul 7 06:15:33.913166 kernel: acpiphp: Slot [20] registered Jul 7 06:15:33.913185 kernel: acpiphp: Slot [21] registered Jul 7 06:15:33.913200 kernel: acpiphp: Slot [22] registered Jul 7 06:15:33.913215 kernel: acpiphp: Slot [23] registered Jul 7 06:15:33.913231 kernel: acpiphp: Slot [24] registered Jul 7 06:15:33.914308 kernel: acpiphp: Slot [25] registered Jul 7 06:15:33.914331 kernel: acpiphp: Slot [26] registered Jul 7 06:15:33.914347 kernel: acpiphp: Slot [27] registered Jul 7 06:15:33.914363 kernel: acpiphp: Slot [28] registered Jul 7 06:15:33.914378 kernel: acpiphp: Slot [29] registered Jul 7 06:15:33.914393 kernel: acpiphp: Slot [30] registered Jul 7 06:15:33.914412 kernel: acpiphp: Slot [31] registered Jul 7 06:15:33.914427 kernel: PCI host bridge to bus 0000:00 Jul 7 06:15:33.914653 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 06:15:33.914800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 06:15:33.914932 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 06:15:33.915059 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 7 06:15:33.915181 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Jul 7 06:15:33.916348 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 06:15:33.916538 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 
0x060000 conventional PCI endpoint Jul 7 06:15:33.916702 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 7 06:15:33.916863 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Jul 7 06:15:33.917012 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 7 06:15:33.917159 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 7 06:15:33.918358 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 7 06:15:33.918513 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 7 06:15:33.918651 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 7 06:15:33.918797 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 7 06:15:33.918930 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 7 06:15:33.919081 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Jul 7 06:15:33.919216 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Jul 7 06:15:33.919376 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 7 06:15:33.919507 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 06:15:33.919647 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Jul 7 06:15:33.919795 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Jul 7 06:15:33.919932 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Jul 7 06:15:33.920060 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Jul 7 06:15:33.920078 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 06:15:33.920097 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 06:15:33.920111 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 06:15:33.920125 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 06:15:33.920139 kernel: ACPI: PCI: Interrupt link LNKS configured 
for IRQ 9 Jul 7 06:15:33.920154 kernel: iommu: Default domain type: Translated Jul 7 06:15:33.920168 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 06:15:33.920183 kernel: efivars: Registered efivars operations Jul 7 06:15:33.920197 kernel: PCI: Using ACPI for IRQ routing Jul 7 06:15:33.920210 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 06:15:33.920227 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Jul 7 06:15:33.925974 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Jul 7 06:15:33.926004 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Jul 7 06:15:33.926218 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 7 06:15:33.926380 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 7 06:15:33.926523 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 06:15:33.926546 kernel: vgaarb: loaded Jul 7 06:15:33.926565 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 7 06:15:33.926589 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 7 06:15:33.926605 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 06:15:33.926620 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:15:33.926635 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:15:33.926649 kernel: pnp: PnP ACPI init Jul 7 06:15:33.926664 kernel: pnp: PnP ACPI: found 5 devices Jul 7 06:15:33.926678 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 06:15:33.926693 kernel: NET: Registered PF_INET protocol family Jul 7 06:15:33.926708 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:15:33.926726 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 06:15:33.926752 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:15:33.926767 kernel: TCP established hash table 
entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 06:15:33.926782 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 06:15:33.926796 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 06:15:33.926811 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 06:15:33.926826 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 06:15:33.926840 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:15:33.926854 kernel: NET: Registered PF_XDP protocol family Jul 7 06:15:33.926988 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 06:15:33.927105 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 06:15:33.927220 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 06:15:33.927350 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 7 06:15:33.927464 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Jul 7 06:15:33.927601 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 06:15:33.927621 kernel: PCI: CLS 0 bytes, default 64 Jul 7 06:15:33.927637 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 06:15:33.927657 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 7 06:15:33.927673 kernel: clocksource: Switched to clocksource tsc Jul 7 06:15:33.927687 kernel: Initialise system trusted keyrings Jul 7 06:15:33.927702 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 06:15:33.927717 kernel: Key type asymmetric registered Jul 7 06:15:33.927732 kernel: Asymmetric key parser 'x509' registered Jul 7 06:15:33.927747 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:15:33.927763 kernel: io scheduler mq-deadline registered Jul 7 06:15:33.927778 kernel: io scheduler kyber registered Jul 7 
06:15:33.927796 kernel: io scheduler bfq registered Jul 7 06:15:33.927812 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 06:15:33.927826 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:15:33.927842 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 06:15:33.927858 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 06:15:33.927875 kernel: i8042: Warning: Keylock active Jul 7 06:15:33.927891 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 06:15:33.927908 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 06:15:33.928086 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 7 06:15:33.928222 kernel: rtc_cmos 00:00: registered as rtc0 Jul 7 06:15:33.930442 kernel: rtc_cmos 00:00: setting system clock to 2025-07-07T06:15:33 UTC (1751868933) Jul 7 06:15:33.930586 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 7 06:15:33.930608 kernel: intel_pstate: CPU model not supported Jul 7 06:15:33.930654 kernel: efifb: probing for efifb Jul 7 06:15:33.930676 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Jul 7 06:15:33.930694 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 7 06:15:33.930714 kernel: efifb: scrolling: redraw Jul 7 06:15:33.930740 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 06:15:33.930757 kernel: Console: switching to colour frame buffer device 100x37 Jul 7 06:15:33.930772 kernel: fb0: EFI VGA frame buffer device Jul 7 06:15:33.930787 kernel: pstore: Using crash dump compression: deflate Jul 7 06:15:33.930803 kernel: pstore: Registered efi_pstore as persistent store backend Jul 7 06:15:33.930819 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:15:33.930833 kernel: Segment Routing with IPv6 Jul 7 06:15:33.930848 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:15:33.930862 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:15:33.930881 kernel: Key type dns_resolver 
registered Jul 7 06:15:33.930896 kernel: IPI shorthand broadcast: enabled Jul 7 06:15:33.930910 kernel: sched_clock: Marking stable (2657001766, 147204676)->(2890169674, -85963232) Jul 7 06:15:33.930925 kernel: registered taskstats version 1 Jul 7 06:15:33.930941 kernel: Loading compiled-in X.509 certificates Jul 7 06:15:33.930956 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 06:15:33.930972 kernel: Demotion targets for Node 0: null Jul 7 06:15:33.930988 kernel: Key type .fscrypt registered Jul 7 06:15:33.931003 kernel: Key type fscrypt-provisioning registered Jul 7 06:15:33.931022 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 06:15:33.931037 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:15:33.931052 kernel: ima: No architecture policies found Jul 7 06:15:33.931068 kernel: clk: Disabling unused clocks Jul 7 06:15:33.931083 kernel: Warning: unable to open an initial console. Jul 7 06:15:33.931099 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 06:15:33.931119 kernel: Write protecting the kernel read-only data: 24576k Jul 7 06:15:33.931134 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 06:15:33.931153 kernel: Run /init as init process Jul 7 06:15:33.931171 kernel: with arguments: Jul 7 06:15:33.931186 kernel: /init Jul 7 06:15:33.931202 kernel: with environment: Jul 7 06:15:33.931216 kernel: HOME=/ Jul 7 06:15:33.931232 kernel: TERM=linux Jul 7 06:15:33.931265 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:15:33.931282 systemd[1]: Successfully made /usr/ read-only. 
Jul 7 06:15:33.931303 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:15:33.931320 systemd[1]: Detected virtualization amazon.
Jul 7 06:15:33.931336 systemd[1]: Detected architecture x86-64.
Jul 7 06:15:33.931352 systemd[1]: Running in initrd.
Jul 7 06:15:33.931367 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:15:33.931387 systemd[1]: Hostname set to .
Jul 7 06:15:33.931403 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:15:33.931419 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:15:33.931436 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:33.931452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:33.931469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:15:33.931486 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:15:33.931502 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:15:33.931523 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:15:33.931540 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:15:33.931557 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:15:33.931574 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:33.931590 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:33.931607 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:15:33.931623 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:15:33.931642 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:15:33.931658 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:15:33.931674 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:15:33.931689 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:15:33.931705 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:15:33.931722 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:15:33.931738 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:33.931754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:33.931774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:33.931789 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:15:33.931803 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:15:33.931818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:15:33.931835 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:15:33.931849 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:15:33.931864 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:15:33.931878 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:15:33.931893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:15:33.931911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:33.931959 systemd-journald[207]: Collecting audit messages is disabled.
Jul 7 06:15:33.931998 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:15:33.932018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:33.932035 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:15:33.932052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:15:33.932069 systemd-journald[207]: Journal started
Jul 7 06:15:33.932106 systemd-journald[207]: Runtime Journal (/run/log/journal/ec237e4371efbefd6e6c17a1b96467e4) is 4.8M, max 38.4M, 33.6M free.
Jul 7 06:15:33.936302 systemd-modules-load[209]: Inserted module 'overlay'
Jul 7 06:15:33.940388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:33.946269 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:15:33.954701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:15:33.959384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:15:33.961572 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:15:33.970435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:15:33.983272 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:15:33.986271 kernel: Bridge firewalling registered
Jul 7 06:15:33.985935 systemd-modules-load[209]: Inserted module 'br_netfilter'
Jul 7 06:15:33.988329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:33.996527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:15:33.999064 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:15:34.006664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:34.008873 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:15:34.013097 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:34.019675 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:15:34.023687 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:15:34.025295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:34.028913 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:34.050947 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:15:34.083399 systemd-resolved[247]: Positive Trust Anchors:
Jul 7 06:15:34.083416 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:15:34.083480 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:15:34.093577 systemd-resolved[247]: Defaulting to hostname 'linux'.
Jul 7 06:15:34.094998 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:34.095693 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:34.148279 kernel: SCSI subsystem initialized
Jul 7 06:15:34.158271 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:15:34.169269 kernel: iscsi: registered transport (tcp)
Jul 7 06:15:34.191454 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:15:34.191534 kernel: QLogic iSCSI HBA Driver
Jul 7 06:15:34.209888 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:15:34.230601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:34.233686 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:15:34.281285 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:15:34.283430 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:15:34.338276 kernel: raid6: avx512x4 gen() 15134 MB/s
Jul 7 06:15:34.356276 kernel: raid6: avx512x2 gen() 15183 MB/s
Jul 7 06:15:34.374267 kernel: raid6: avx512x1 gen() 15111 MB/s
Jul 7 06:15:34.392263 kernel: raid6: avx2x4 gen() 15070 MB/s
Jul 7 06:15:34.410263 kernel: raid6: avx2x2 gen() 15111 MB/s
Jul 7 06:15:34.428571 kernel: raid6: avx2x1 gen() 11412 MB/s
Jul 7 06:15:34.428634 kernel: raid6: using algorithm avx512x2 gen() 15183 MB/s
Jul 7 06:15:34.447633 kernel: raid6: .... xor() 24229 MB/s, rmw enabled
Jul 7 06:15:34.447715 kernel: raid6: using avx512x2 recovery algorithm
Jul 7 06:15:34.469283 kernel: xor: automatically using best checksumming function avx
Jul 7 06:15:34.637275 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:15:34.644181 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:15:34.646254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:34.677137 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jul 7 06:15:34.683617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:34.686502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:15:34.709771 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jul 7 06:15:34.737707 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:15:34.739695 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:15:34.800394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:34.804807 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:15:34.881174 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 7 06:15:34.881472 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 7 06:15:34.886268 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jul 7 06:15:34.895265 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:6b:36:c3:c8:59
Jul 7 06:15:34.916266 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:15:34.917770 (udev-worker)[509]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 06:15:34.933333 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 7 06:15:34.936264 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 06:15:34.939581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:34.947568 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:15:34.939774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:34.941008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:34.949484 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:34.952111 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:34.958318 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 06:15:34.970653 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:15:34.970739 kernel: GPT:9289727 != 16777215
Jul 7 06:15:34.970761 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:15:34.978528 kernel: GPT:9289727 != 16777215
Jul 7 06:15:34.978595 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:15:34.978622 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 06:15:34.977996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:34.978136 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:34.980471 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:34.986101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:35.008266 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:15:35.022547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:35.031270 kernel: nvme nvme0: using unchecked data buffer
Jul 7 06:15:35.173209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 7 06:15:35.174973 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:15:35.186176 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 7 06:15:35.196531 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 7 06:15:35.197076 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 7 06:15:35.208902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 06:15:35.209551 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:15:35.210802 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:35.211966 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:15:35.213689 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:15:35.218422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:15:35.237791 disk-uuid[701]: Primary Header is updated.
Jul 7 06:15:35.237791 disk-uuid[701]: Secondary Entries is updated.
Jul 7 06:15:35.237791 disk-uuid[701]: Secondary Header is updated.
Jul 7 06:15:35.245313 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 06:15:35.246534 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:15:36.261313 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 06:15:36.261492 disk-uuid[703]: The operation has completed successfully.
Jul 7 06:15:36.388137 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:15:36.388291 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:15:36.427510 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:15:36.459865 sh[969]: Success
Jul 7 06:15:36.480517 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:15:36.480601 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:15:36.483369 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:15:36.494263 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 7 06:15:36.592193 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:15:36.594903 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:15:36.618799 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:15:36.640300 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:15:36.640366 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (992)
Jul 7 06:15:36.646266 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:15:36.646325 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:36.648642 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:15:36.727523 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:15:36.728559 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:15:36.729104 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:15:36.729854 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:15:36.731593 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:15:36.773667 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1025)
Jul 7 06:15:36.781931 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:36.782011 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:36.782034 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 06:15:36.816612 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:36.817900 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:15:36.820620 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:15:36.865525 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:15:36.868540 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:15:36.910680 systemd-networkd[1161]: lo: Link UP
Jul 7 06:15:36.910693 systemd-networkd[1161]: lo: Gained carrier
Jul 7 06:15:36.912471 systemd-networkd[1161]: Enumeration completed
Jul 7 06:15:36.912882 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:15:36.913203 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:36.913208 systemd-networkd[1161]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:15:36.917941 systemd-networkd[1161]: eth0: Link UP
Jul 7 06:15:36.917949 systemd-networkd[1161]: eth0: Gained carrier
Jul 7 06:15:36.917964 systemd-networkd[1161]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:15:36.920424 systemd[1]: Reached target network.target - Network.
Jul 7 06:15:36.938354 systemd-networkd[1161]: eth0: DHCPv4 address 172.31.23.116/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 06:15:37.162862 ignition[1102]: Ignition 2.21.0
Jul 7 06:15:37.162878 ignition[1102]: Stage: fetch-offline
Jul 7 06:15:37.163095 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:37.163108 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:37.163499 ignition[1102]: Ignition finished successfully
Jul 7 06:15:37.166370 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:15:37.167922 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 06:15:37.196960 ignition[1171]: Ignition 2.21.0
Jul 7 06:15:37.196976 ignition[1171]: Stage: fetch
Jul 7 06:15:37.197370 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:37.197383 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:37.197497 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:37.218215 ignition[1171]: PUT result: OK
Jul 7 06:15:37.220691 ignition[1171]: parsed url from cmdline: ""
Jul 7 06:15:37.220704 ignition[1171]: no config URL provided
Jul 7 06:15:37.220712 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:15:37.220724 ignition[1171]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:15:37.220748 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:37.221550 ignition[1171]: PUT result: OK
Jul 7 06:15:37.221599 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 7 06:15:37.222633 ignition[1171]: GET result: OK
Jul 7 06:15:37.222701 ignition[1171]: parsing config with SHA512: d912151ac74eab4b45dca2cde4e27688b3ed07bc0ddc9c50cd1896f10ce0ebe97ac10b37fb91f3398d957caf80e7cdfb9f0f973f72bc3ef340717cadbc2a0200
Jul 7 06:15:37.228106 unknown[1171]: fetched base config from "system"
Jul 7 06:15:37.228564 unknown[1171]: fetched base config from "system"
Jul 7 06:15:37.228894 ignition[1171]: fetch: fetch complete
Jul 7 06:15:37.228570 unknown[1171]: fetched user config from "aws"
Jul 7 06:15:37.228899 ignition[1171]: fetch: fetch passed
Jul 7 06:15:37.228942 ignition[1171]: Ignition finished successfully
Jul 7 06:15:37.231345 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 06:15:37.232637 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:15:37.294207 ignition[1177]: Ignition 2.21.0
Jul 7 06:15:37.294225 ignition[1177]: Stage: kargs
Jul 7 06:15:37.294619 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:37.294633 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:37.294827 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:37.295678 ignition[1177]: PUT result: OK
Jul 7 06:15:37.298102 ignition[1177]: kargs: kargs passed
Jul 7 06:15:37.298181 ignition[1177]: Ignition finished successfully
Jul 7 06:15:37.300271 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:15:37.301739 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:15:37.330567 ignition[1184]: Ignition 2.21.0
Jul 7 06:15:37.330581 ignition[1184]: Stage: disks
Jul 7 06:15:37.330993 ignition[1184]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:37.331006 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:37.331126 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:37.332357 ignition[1184]: PUT result: OK
Jul 7 06:15:37.336320 ignition[1184]: disks: disks passed
Jul 7 06:15:37.336844 ignition[1184]: Ignition finished successfully
Jul 7 06:15:37.338614 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:15:37.339372 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:15:37.339766 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:15:37.340603 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:15:37.340930 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:15:37.341514 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:15:37.343169 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:15:37.400162 systemd-fsck[1193]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 7 06:15:37.403252 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:15:37.404792 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:15:37.556268 kernel: EXT4-fs (nvme0n1p9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:15:37.556727 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:15:37.557664 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:15:37.559612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:15:37.562150 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:15:37.564841 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:15:37.564913 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:15:37.564950 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:15:37.574743 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:15:37.576888 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:15:37.596275 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1212)
Jul 7 06:15:37.600574 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:37.600648 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:37.600669 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 06:15:37.609414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:15:37.860761 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:15:37.890437 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:15:37.896195 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:15:37.901869 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:15:38.152013 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:15:38.154524 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:15:38.156316 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:15:38.168906 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:15:38.172526 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:38.220383 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:15:38.221788 ignition[1324]: INFO : Ignition 2.21.0
Jul 7 06:15:38.223219 ignition[1324]: INFO : Stage: mount
Jul 7 06:15:38.223219 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:38.223219 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:38.223219 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:38.225601 ignition[1324]: INFO : PUT result: OK
Jul 7 06:15:38.228895 ignition[1324]: INFO : mount: mount passed
Jul 7 06:15:38.229575 ignition[1324]: INFO : Ignition finished successfully
Jul 7 06:15:38.231041 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:15:38.232627 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:15:38.254208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:15:38.291271 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1336)
Jul 7 06:15:38.295096 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:15:38.295163 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:15:38.295177 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
Jul 7 06:15:38.304630 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:15:38.334895 ignition[1352]: INFO : Ignition 2.21.0
Jul 7 06:15:38.334895 ignition[1352]: INFO : Stage: files
Jul 7 06:15:38.336407 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:38.336407 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:38.336407 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:38.337884 ignition[1352]: INFO : PUT result: OK
Jul 7 06:15:38.342695 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:15:38.344169 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:15:38.344169 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:15:38.348264 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:15:38.348995 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:15:38.348995 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:15:38.348725 unknown[1352]: wrote ssh authorized keys file for user: core
Jul 7 06:15:38.352563 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:15:38.353332 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 06:15:38.410413 systemd-networkd[1161]: eth0: Gained IPv6LL
Jul 7 06:15:38.425969 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:15:38.593019 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:15:38.594181 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:15:38.594181 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 06:15:39.077909 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 06:15:39.175750 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 06:15:39.175750 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:15:39.177644 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:15:39.182497 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:15:39.183366 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:15:39.183366 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:39.185172 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:39.185172 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:39.185172 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 06:15:39.900982 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 06:15:40.140923 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:15:40.140923 ignition[1352]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 06:15:40.143675 ignition[1352]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:15:40.147923 ignition[1352]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:15:40.147923 ignition[1352]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 06:15:40.147923 ignition[1352]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:15:40.150902 ignition[1352]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:15:40.150902 ignition[1352]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:15:40.150902 ignition[1352]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:15:40.150902 ignition[1352]: INFO : files: files passed
Jul 7 06:15:40.150902 ignition[1352]: INFO : Ignition finished successfully
Jul 7 06:15:40.150127 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:15:40.153470 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:15:40.156996 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:15:40.165584 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:15:40.166107 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:15:40.172985 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:15:40.172985 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:15:40.175121 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:15:40.176878 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:15:40.177739 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:15:40.179331 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:15:40.235519 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:15:40.235628 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:15:40.238065 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:15:40.239109 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:15:40.239932 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:15:40.240836 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:15:40.259799 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:15:40.261735 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:15:40.286521 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:15:40.287341 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:40.288732 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:15:40.289479 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:15:40.289630 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:15:40.290577 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:15:40.291575 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:15:40.292279 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:15:40.292926 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:15:40.293621 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:15:40.294399 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:15:40.295321 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:15:40.295969 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:15:40.296793 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:15:40.297768 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:15:40.298547 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:15:40.299341 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:15:40.299497 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:15:40.300380 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:40.301475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:40.302062 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:15:40.302335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:40.303946 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:15:40.304078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:15:40.305195 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:15:40.305407 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:15:40.306130 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:15:40.306273 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:15:40.308197 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:15:40.308614 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:15:40.308763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:40.310424 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:15:40.311330 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:15:40.312386 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:40.313088 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:15:40.313212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:15:40.319488 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:15:40.322341 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:15:40.337325 ignition[1407]: INFO : Ignition 2.21.0
Jul 7 06:15:40.337325 ignition[1407]: INFO : Stage: umount
Jul 7 06:15:40.338927 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:15:40.338927 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 06:15:40.338927 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 06:15:40.343049 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:15:40.366285 ignition[1407]: INFO : PUT result: OK
Jul 7 06:15:40.371192 ignition[1407]: INFO : umount: umount passed
Jul 7 06:15:40.371192 ignition[1407]: INFO : Ignition finished successfully
Jul 7 06:15:40.372407 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:15:40.372513 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:15:40.373717 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:15:40.373810 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:15:40.374585 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:15:40.374640 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:15:40.375436 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:15:40.375483 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:15:40.376026 systemd[1]: Stopped target network.target - Network.
Jul 7 06:15:40.376932 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:15:40.377015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:15:40.377588 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:15:40.378138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:15:40.381322 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:40.381688 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:15:40.382592 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:15:40.383327 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:15:40.383381 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:15:40.383917 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:15:40.383962 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:15:40.384510 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:15:40.384569 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:15:40.385102 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:15:40.385154 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:15:40.385806 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:15:40.386339 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:40.389816 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:15:40.389945 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:15:40.393100 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:15:40.393406 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:15:40.393462 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:40.395452 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:15:40.397422 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:15:40.397543 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:15:40.399593 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:15:40.399828 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:15:40.400666 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:15:40.400703 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:40.402127 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:15:40.402992 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:15:40.403350 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:15:40.404387 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:15:40.404435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:40.405655 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:15:40.406047 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:40.407130 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:40.410759 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:15:40.432885 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:15:40.433057 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:15:40.434120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:15:40.434191 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:40.434870 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:15:40.434916 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:40.436435 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:15:40.436488 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:15:40.437493 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:15:40.437542 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:15:40.438702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:15:40.438769 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:15:40.442067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:15:40.443469 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:15:40.443545 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:40.445373 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:15:40.445433 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:15:40.448193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:40.448288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:40.450207 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:15:40.451508 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:15:40.460139 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:15:40.460583 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:15:40.461852 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:15:40.461989 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:15:40.463577 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:15:40.464405 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:15:40.464497 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:15:40.466145 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:15:40.484501 systemd[1]: Switching root.
Jul 7 06:15:40.528215 systemd-journald[207]: Journal stopped
Jul 7 06:15:42.409424 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:15:42.409529 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:15:42.409552 kernel: SELinux: policy capability open_perms=1
Jul 7 06:15:42.409571 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:15:42.409595 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:15:42.409614 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:15:42.409633 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:15:42.409653 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:15:42.409672 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:15:42.409694 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:15:42.409713 kernel: audit: type=1403 audit(1751868940.897:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:15:42.409734 systemd[1]: Successfully loaded SELinux policy in 76.108ms.
Jul 7 06:15:42.409766 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.328ms.
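The run of entries above ends with the initrd pivot: "Switching root", "Journal stopped", and then the first message logged after the new journald comes up. A minimal sketch (assuming this journalctl-style `Mon D HH:MM:SS.ffffff` prefix, and taking the year as 2025 from the build string in the header) for measuring the gap across the pivot; note the second timestamp is when the message was logged after restart, not the exact SIGTERM delivery time:

```python
import re
from datetime import datetime

# Timestamp prefix as it appears in the log above, e.g. "Jul 7 06:15:40.528215".
STAMP = re.compile(r"^([A-Z][a-z]{2}) +(\d+) (\d{2}:\d{2}:\d{2}\.\d{6})")

def parse_stamp(line: str, year: int = 2025) -> datetime:
    """Parse the leading syslog-style timestamp of one journal line."""
    m = STAMP.match(line)
    if not m:
        raise ValueError(f"no timestamp in: {line!r}")
    mon, day, hms = m.groups()
    return datetime.strptime(f"{year} {mon} {day} {hms}", "%Y %b %d %H:%M:%S.%f")

stopped = parse_stamp("Jul 7 06:15:40.528215 systemd-journald[207]: Journal stopped")
sigterm = parse_stamp("Jul 7 06:15:42.409424 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).")
gap = (sigterm - stopped).total_seconds()
print(f"pivot gap: {gap:.6f} s")  # about 1.88 s between the two entries
```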
Jul 7 06:15:42.409788 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:15:42.409810 systemd[1]: Detected virtualization amazon.
Jul 7 06:15:42.409831 systemd[1]: Detected architecture x86-64.
Jul 7 06:15:42.409855 systemd[1]: Detected first boot.
Jul 7 06:15:42.409875 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:15:42.409899 zram_generator::config[1451]: No configuration found.
Jul 7 06:15:42.409920 kernel: Guest personality initialized and is inactive
Jul 7 06:15:42.409940 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:15:42.409959 kernel: Initialized host personality
Jul 7 06:15:42.409978 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:15:42.409997 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:15:42.410025 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:15:42.410045 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:15:42.410068 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:15:42.410088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:15:42.410110 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:15:42.410129 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:15:42.410150 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:15:42.410168 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:15:42.410191 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:15:42.410210 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:15:42.410229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:15:42.410269 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:15:42.410288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:15:42.410307 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:15:42.410326 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:15:42.410344 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:15:42.410363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:15:42.410382 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:15:42.410405 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:15:42.410423 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:15:42.410442 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:15:42.410461 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:15:42.410480 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:15:42.410499 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:15:42.410518 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:15:42.410537 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:15:42.410556 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:15:42.410578 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:15:42.410597 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:15:42.410616 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:15:42.410637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:15:42.410656 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:15:42.410742 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:15:42.410762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:15:42.410781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:15:42.410799 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:15:42.410818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:15:42.410840 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:15:42.410858 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:15:42.410877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:42.410895 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:15:42.410914 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:15:42.410933 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:15:42.410952 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:15:42.410971 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:15:42.410992 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:15:42.411011 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:42.411030 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:15:42.411049 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:15:42.411067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:42.411085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:15:42.411106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:42.411125 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:15:42.411146 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:42.411164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:15:42.411183 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:15:42.411200 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:15:42.411218 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:15:42.411236 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:15:42.411274 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:42.411294 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:15:42.411315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:15:42.411338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:15:42.411357 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:15:42.411375 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:15:42.411393 kernel: loop: module loaded
Jul 7 06:15:42.411412 kernel: fuse: init (API version 7.41)
Jul 7 06:15:42.411435 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:15:42.411456 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:15:42.411478 systemd[1]: Stopped verity-setup.service.
Jul 7 06:15:42.411503 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:42.411523 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:15:42.411548 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:15:42.411567 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:15:42.411585 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:15:42.411604 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:15:42.411621 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:15:42.411640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:15:42.411659 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:15:42.411678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:15:42.411697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:15:42.411717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:15:42.411735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:15:42.411753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:15:42.411771 kernel: ACPI: bus type drm_connector registered
Jul 7 06:15:42.411788 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:15:42.411807 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:15:42.411826 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:15:42.411845 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:15:42.411862 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:15:42.411883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:15:42.411902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:15:42.411923 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:15:42.411945 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:15:42.411966 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:15:42.411988 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:15:42.412008 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:15:42.412026 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:15:42.412072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:15:42.412093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:42.412112 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:15:42.412134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:15:42.412153 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:15:42.412175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:15:42.412196 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:15:42.412218 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:15:42.412262 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:15:42.412290 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:15:42.412313 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:15:42.412380 systemd-journald[1534]: Collecting audit messages is disabled.
Jul 7 06:15:42.412427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:15:42.412448 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:15:42.412467 kernel: loop0: detected capacity change from 0 to 221472
Jul 7 06:15:42.412488 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:15:42.412510 systemd-journald[1534]: Journal started
Jul 7 06:15:42.412547 systemd-journald[1534]: Runtime Journal (/run/log/journal/ec237e4371efbefd6e6c17a1b96467e4) is 4.8M, max 38.4M, 33.6M free.
Jul 7 06:15:41.886915 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:15:41.906561 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 06:15:41.907172 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:15:42.415302 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:15:42.417661 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:15:42.438607 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:15:42.440065 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:15:42.443999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:15:42.446792 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:15:42.455435 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:15:42.460266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:15:42.460031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:15:42.472225 systemd-journald[1534]: Time spent on flushing to /var/log/journal/ec237e4371efbefd6e6c17a1b96467e4 is 43.061ms for 1023 entries.
Jul 7 06:15:42.472225 systemd-journald[1534]: System Journal (/var/log/journal/ec237e4371efbefd6e6c17a1b96467e4) is 8M, max 195.6M, 187.6M free.
Jul 7 06:15:42.526897 systemd-journald[1534]: Received client request to flush runtime journal.
Jul 7 06:15:42.526986 kernel: loop1: detected capacity change from 0 to 113872
Jul 7 06:15:42.484410 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:15:42.528999 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:15:42.565011 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:15:42.569539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:15:42.609149 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Jul 7 06:15:42.609444 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Jul 7 06:15:42.615399 kernel: loop2: detected capacity change from 0 to 72352
Jul 7 06:15:42.615237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
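The "systemd 256.8 running in system mode (+PAM +AUDIT ...)" banner earlier in this section encodes compile-time features as `+NAME` (built in) and `-NAME` (compiled out) tokens. A small sketch splitting that flag string, copied verbatim from the log above, into enabled and disabled sets:

```python
# Flag string from the "systemd 256.8 running in system mode (...)" entry above.
FLAGS = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
         "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
         "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
         "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
         "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

def split_features(flags: str) -> tuple[set, set]:
    """Return (enabled, disabled) feature names from a systemd version banner."""
    tokens = flags.split()
    enabled = {t[1:] for t in tokens if t.startswith("+")}
    disabled = {t[1:] for t in tokens if t.startswith("-")}
    return enabled, disabled

enabled, disabled = split_features(FLAGS)
print(f"{len(enabled)} enabled, {len(disabled)} disabled")
```

This makes it easy to confirm at a glance, for example, that this build has SELinux support compiled in but AppArmor compiled out, consistent with the SELinux policy-load messages right after the root switch.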
Jul 7 06:15:42.729268 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:15:42.824347 kernel: loop4: detected capacity change from 0 to 221472
Jul 7 06:15:42.871436 kernel: loop5: detected capacity change from 0 to 113872
Jul 7 06:15:42.889281 kernel: loop6: detected capacity change from 0 to 72352
Jul 7 06:15:42.909413 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:15:42.918583 kernel: loop7: detected capacity change from 0 to 146240
Jul 7 06:15:42.947913 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 7 06:15:42.948449 (sd-merge)[1610]: Merged extensions into '/usr'.
Jul 7 06:15:42.954103 systemd[1]: Reload requested from client PID 1563 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:15:42.954294 systemd[1]: Reloading...
Jul 7 06:15:43.021262 zram_generator::config[1633]: No configuration found.
Jul 7 06:15:43.181704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:43.344493 systemd[1]: Reloading finished in 388 ms.
Jul 7 06:15:43.366563 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:15:43.379399 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:15:43.382410 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:15:43.420615 systemd[1]: Reload requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:15:43.420974 systemd[1]: Reloading...
Jul 7 06:15:43.423723 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:15:43.423763 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:15:43.424116 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:15:43.424547 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:15:43.425831 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:15:43.426285 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Jul 7 06:15:43.426376 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Jul 7 06:15:43.441558 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:15:43.442524 systemd-tmpfiles[1688]: Skipping /boot
Jul 7 06:15:43.480215 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:15:43.480233 systemd-tmpfiles[1688]: Skipping /boot
Jul 7 06:15:43.570281 zram_generator::config[1719]: No configuration found.
Jul 7 06:15:43.692564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:15:43.804082 systemd[1]: Reloading finished in 382 ms.
Jul 7 06:15:43.816614 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:15:43.833619 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:15:43.845184 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:15:43.850887 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:15:43.859043 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:15:43.864433 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:15:43.868908 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:15:43.874534 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:15:43.882770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:43.883062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:43.889015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:15:43.895489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:15:43.899600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:15:43.900485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:43.900679 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:15:43.908431 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:15:43.909315 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:43.921877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:15:43.922275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:15:43.922603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:15:43.922837 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:15:43.923055 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:15:43.934752 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:15:43.935193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:15:43.940803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:15:43.941764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:15:43.941947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:15:43.942215 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:15:43.944629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:15:43.953990 systemd[1]: Finished ensure-sysext.service. Jul 7 06:15:43.956022 ldconfig[1555]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:15:43.956534 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:15:43.958007 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:15:43.958265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 7 06:15:43.968349 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:15:43.974183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:15:43.975404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:15:43.977191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:15:43.977503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:15:43.979212 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:15:43.980333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:15:43.992960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:15:43.993070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:15:44.037512 systemd-udevd[1775]: Using default interface naming scheme 'v255'. Jul 7 06:15:44.037774 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:15:44.041503 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:15:44.059524 augenrules[1809]: No rules Jul 7 06:15:44.061941 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:15:44.062261 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:15:44.064869 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:15:44.078233 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:15:44.089215 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 7 06:15:44.091381 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:15:44.111152 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:15:44.115111 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:15:44.248785 (udev-worker)[1834]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:15:44.252590 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 06:15:44.290145 systemd-resolved[1773]: Positive Trust Anchors: Jul 7 06:15:44.290169 systemd-resolved[1773]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:15:44.290226 systemd-resolved[1773]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:15:44.301583 systemd-resolved[1773]: Defaulting to hostname 'linux'. Jul 7 06:15:44.306088 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:15:44.307568 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:15:44.309378 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:15:44.310611 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 7 06:15:44.311995 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:15:44.313051 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 06:15:44.314585 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:15:44.315667 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:15:44.316803 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:15:44.317699 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:15:44.317836 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:15:44.319326 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:15:44.322338 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:15:44.328146 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:15:44.339230 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 06:15:44.343558 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 06:15:44.344356 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 06:15:44.354964 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:15:44.356869 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 06:15:44.360155 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:15:44.362233 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:15:44.363878 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:15:44.364865 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 7 06:15:44.364905 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:15:44.368412 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 06:15:44.372446 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:15:44.377503 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:15:44.386437 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:15:44.391541 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:15:44.393343 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:15:44.395562 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 06:15:44.401503 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:15:44.411758 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 06:15:44.419711 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:15:44.441297 systemd-networkd[1826]: lo: Link UP Jul 7 06:15:44.441309 systemd-networkd[1826]: lo: Gained carrier Jul 7 06:15:44.444817 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 06:15:44.447401 systemd-networkd[1826]: Enumeration completed Jul 7 06:15:44.458421 systemd-networkd[1826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:15:44.459272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:15:44.462838 systemd-networkd[1826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:15:44.468271 jq[1865]: false Jul 7 06:15:44.471794 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 7 06:15:44.480494 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:15:44.484721 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:15:44.485475 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:15:44.488507 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:15:44.502389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:15:44.503856 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:15:44.507119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:15:44.508022 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:15:44.516877 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:15:44.519381 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:15:44.519639 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:15:44.530094 systemd[1]: Reached target network.target - Network. Jul 7 06:15:44.532319 systemd-networkd[1826]: eth0: Link UP Jul 7 06:15:44.532559 systemd-networkd[1826]: eth0: Gained carrier Jul 7 06:15:44.532591 systemd-networkd[1826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 06:15:44.540827 google_oslogin_nss_cache[1867]: oslogin_cache_refresh[1867]: Refreshing passwd entry cache Jul 7 06:15:44.539335 oslogin_cache_refresh[1867]: Refreshing passwd entry cache Jul 7 06:15:44.541330 systemd-networkd[1826]: eth0: DHCPv4 address 172.31.23.116/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 7 06:15:44.541367 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:15:44.545497 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 06:15:44.558598 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:15:44.573275 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:15:44.590737 extend-filesystems[1866]: Found /dev/nvme0n1p6 Jul 7 06:15:44.593119 google_oslogin_nss_cache[1867]: oslogin_cache_refresh[1867]: Failure getting users, quitting Jul 7 06:15:44.593119 google_oslogin_nss_cache[1867]: oslogin_cache_refresh[1867]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:15:44.593119 google_oslogin_nss_cache[1867]: oslogin_cache_refresh[1867]: Refreshing group entry cache Jul 7 06:15:44.592360 oslogin_cache_refresh[1867]: Failure getting users, quitting Jul 7 06:15:44.592384 oslogin_cache_refresh[1867]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:15:44.592447 oslogin_cache_refresh[1867]: Refreshing group entry cache Jul 7 06:15:44.594681 google_oslogin_nss_cache[1867]: oslogin_cache_refresh[1867]: Failure getting groups, quitting Jul 7 06:15:44.595550 oslogin_cache_refresh[1867]: Failure getting groups, quitting Jul 7 06:15:44.595700 google_oslogin_nss_cache[1867]: oslogin_cache_refresh[1867]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:15:44.595577 oslogin_cache_refresh[1867]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
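The DHCPv4 lease systemd-networkd logs above (172.31.23.116/20, gateway 172.31.16.1) can be sanity-checked offline with Python's `ipaddress` module; an illustrative sketch using only values taken from the log lines:

```python
import ipaddress

# eth0 acquired 172.31.23.116/20 with gateway 172.31.16.1 (per the log).
iface = ipaddress.ip_interface("172.31.23.116/20")
gateway = ipaddress.ip_address("172.31.16.1")

network = iface.network
assert str(network) == "172.31.16.0/20"   # the enclosing subnet block
assert gateway in network                  # the gateway is on-link
assert network.num_addresses == 4096       # a /20 spans 4096 addresses
print(network, network.netmask)
```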
Jul 7 06:15:44.598965 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 06:15:44.599279 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 06:15:44.637097 jq[1877]: true Jul 7 06:15:44.643970 extend-filesystems[1866]: Found /dev/nvme0n1p9 Jul 7 06:15:44.667903 extend-filesystems[1866]: Checking size of /dev/nvme0n1p9 Jul 7 06:15:44.675260 update_engine[1876]: I20250707 06:15:44.674810 1876 main.cc:92] Flatcar Update Engine starting Jul 7 06:15:44.679566 dbus-daemon[1863]: [system] SELinux support is enabled Jul 7 06:15:44.679797 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:15:44.686508 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:15:44.686554 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:15:44.695413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:15:44.695448 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:15:44.709993 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1826 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 06:15:44.711565 update_engine[1876]: I20250707 06:15:44.711504 1876 update_check_scheduler.cc:74] Next update check in 3m52s Jul 7 06:15:44.716517 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 06:15:44.718397 systemd[1]: Started update-engine.service - Update Engine. 
Jul 7 06:15:44.721845 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:15:44.729425 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:15:44.730498 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:15:44.730805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:15:44.737546 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 7 06:15:44.750322 tar[1891]: linux-amd64/helm
Jul 7 06:15:44.750697 jq[1904]: true
Jul 7 06:15:44.759278 extend-filesystems[1866]: Resized partition /dev/nvme0n1p9
Jul 7 06:15:44.765842 extend-filesystems[1935]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:15:44.775732 (ntainerd)[1926]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:15:44.808768 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 7 06:15:44.845638 ntpd[1869]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:10 UTC 2025 (1): Starting
Jul 7 06:15:44.846199 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:17:10 UTC 2025 (1): Starting
Jul 7 06:15:44.848183 ntpd[1869]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: ----------------------------------------------------
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: ntp-4 is maintained by Network Time Foundation,
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: corporation. Support and training for ntp-4 are
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: available at https://www.nwtime.org/support
Jul 7 06:15:44.851011 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: ----------------------------------------------------
Jul 7 06:15:44.848216 ntpd[1869]: ----------------------------------------------------
Jul 7 06:15:44.848227 ntpd[1869]: ntp-4 is maintained by Network Time Foundation,
Jul 7 06:15:44.848257 ntpd[1869]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jul 7 06:15:44.848267 ntpd[1869]: corporation. Support and training for ntp-4 are
Jul 7 06:15:44.848277 ntpd[1869]: available at https://www.nwtime.org/support
Jul 7 06:15:44.848287 ntpd[1869]: ----------------------------------------------------
Jul 7 06:15:44.856784 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: proto: precision = 0.093 usec (-23)
Jul 7 06:15:44.855719 ntpd[1869]: proto: precision = 0.093 usec (-23)
Jul 7 06:15:44.858346 ntpd[1869]: basedate set to 2025-06-24
Jul 7 06:15:44.859384 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: basedate set to 2025-06-24
Jul 7 06:15:44.859384 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: gps base set to 2025-06-29 (week 2373)
Jul 7 06:15:44.858371 ntpd[1869]: gps base set to 2025-06-29 (week 2373)
Jul 7 06:15:44.869738 ntpd[1869]: Listen and drop on 0 v6wildcard [::]:123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Listen and drop on 0 v6wildcard [::]:123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Listen normally on 2 lo 127.0.0.1:123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Listen normally on 3 eth0 172.31.23.116:123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Listen normally on 4 lo [::1]:123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: bind(21) AF_INET6 fe80::46b:36ff:fec3:c859%2#123 flags 0x11 failed: Cannot assign requested address
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: unable to create socket on eth0 (5) for fe80::46b:36ff:fec3:c859%2#123
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: failed to init interface for address fe80::46b:36ff:fec3:c859%2
Jul 7 06:15:44.873767 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: Listening on routing socket on fd #21 for interface updates
Jul 7 06:15:44.869809 ntpd[1869]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jul 7 06:15:44.870014 ntpd[1869]: Listen normally on 2 lo 127.0.0.1:123
Jul 7 06:15:44.870057 ntpd[1869]: Listen normally on 3 eth0 172.31.23.116:123
Jul 7 06:15:44.870101 ntpd[1869]: Listen normally on 4 lo [::1]:123
Jul 7 06:15:44.870146 ntpd[1869]: bind(21) AF_INET6 fe80::46b:36ff:fec3:c859%2#123 flags 0x11 failed: Cannot assign requested address
Jul 7 06:15:44.870169 ntpd[1869]: unable to create socket on eth0 (5) for fe80::46b:36ff:fec3:c859%2#123
Jul 7 06:15:44.870184 ntpd[1869]: failed to init interface for address fe80::46b:36ff:fec3:c859%2
Jul 7 06:15:44.870217 ntpd[1869]: Listening on routing socket on fd #21 for interface updates
Jul 7 06:15:44.888091 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 7 06:15:44.893314 ntpd[1869]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 7 06:15:44.894735 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 7 06:15:44.894735 ntpd[1869]: 7 Jul 06:15:44 ntpd[1869]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 7 06:15:44.893352 ntpd[1869]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jul 7 06:15:44.899990 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:15:44.900064 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jul 7 06:15:44.903371 kernel: ACPI: button: Sleep Button [SLPF]
Jul 7 06:15:44.908271 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 7 06:15:44.929294 extend-filesystems[1935]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 7 06:15:44.929294 extend-filesystems[1935]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 06:15:44.929294 extend-filesystems[1935]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 7 06:15:44.949739 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jul 7 06:15:44.950075 extend-filesystems[1866]: Resized filesystem in /dev/nvme0n1p9
Jul 7 06:15:44.930380 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:15:44.930754 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:15:45.005276 bash[1970]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:15:45.004995 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:15:45.019563 systemd[1]: Starting sshkeys.service...
Jul 7 06:15:45.033596 coreos-metadata[1862]: Jul 07 06:15:45.033 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 7 06:15:45.042717 coreos-metadata[1862]: Jul 07 06:15:45.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 7 06:15:45.049730 coreos-metadata[1862]: Jul 07 06:15:45.049 INFO Fetch successful
Jul 7 06:15:45.049730 coreos-metadata[1862]: Jul 07 06:15:45.049 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 7 06:15:45.050523 coreos-metadata[1862]: Jul 07 06:15:45.050 INFO Fetch successful
Jul 7 06:15:45.050523 coreos-metadata[1862]: Jul 07 06:15:45.050 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 7 06:15:45.061933 coreos-metadata[1862]: Jul 07 06:15:45.061 INFO Fetch successful
Jul 7 06:15:45.061933 coreos-metadata[1862]: Jul 07 06:15:45.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 7 06:15:45.062894 coreos-metadata[1862]: Jul 07 06:15:45.062 INFO Fetch successful
Jul 7 06:15:45.062894 coreos-metadata[1862]: Jul 07 06:15:45.062 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 7 06:15:45.072281 coreos-metadata[1862]: Jul 07 06:15:45.069 INFO Fetch failed with 404: resource not found
Jul 7 06:15:45.072281 coreos-metadata[1862]: Jul 07 06:15:45.071 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 7 06:15:45.078350 coreos-metadata[1862]: Jul 07 06:15:45.077 INFO Fetch successful
Jul 7 06:15:45.078350 coreos-metadata[1862]: Jul 07 06:15:45.077 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 7 06:15:45.081711 coreos-metadata[1862]: Jul 07 06:15:45.080 INFO Fetch successful
Jul 7 06:15:45.081711 coreos-metadata[1862]: Jul 07 06:15:45.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 7 06:15:45.081966 coreos-metadata[1862]: Jul 07 06:15:45.081 INFO Fetch successful
Jul 7 06:15:45.094150 coreos-metadata[1862]: Jul 07 06:15:45.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 7 06:15:45.097729 coreos-metadata[1862]: Jul 07 06:15:45.096 INFO Fetch successful
Jul 7 06:15:45.097729 coreos-metadata[1862]: Jul 07 06:15:45.097 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 7 06:15:45.096819 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 06:15:45.107130 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 06:15:45.113691 coreos-metadata[1862]: Jul 07 06:15:45.110 INFO Fetch successful
Jul 7 06:15:45.183271 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 06:15:45.197932 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
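The EXT4 messages above record an online grow of the root filesystem from 553472 to 1489915 blocks at the 4 KiB block size resize2fs reports; the before/after sizes work out as follows (a standalone arithmetic sketch):

```python
# Online resize of /dev/nvme0n1p9 as logged: 553472 -> 1489915 blocks,
# each block 4 KiB ("(4k) blocks" per resize2fs).
BLOCK = 4096
old_bytes = 553472 * BLOCK
new_bytes = 1489915 * BLOCK
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")  # → 2.11 GiB -> 5.68 GiB
```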
Jul 7 06:15:45.238386 locksmithd[1924]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:15:45.287578 coreos-metadata[2018]: Jul 07 06:15:45.287 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 06:15:45.288645 coreos-metadata[2018]: Jul 07 06:15:45.288 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 06:15:45.289848 coreos-metadata[2018]: Jul 07 06:15:45.289 INFO Fetch successful Jul 7 06:15:45.289848 coreos-metadata[2018]: Jul 07 06:15:45.289 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 06:15:45.291523 coreos-metadata[2018]: Jul 07 06:15:45.291 INFO Fetch successful Jul 7 06:15:45.292850 unknown[2018]: wrote ssh authorized keys file for user: core Jul 7 06:15:45.373318 update-ssh-keys[2050]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:15:45.370863 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 06:15:45.378541 systemd[1]: Finished sshkeys.service. Jul 7 06:15:45.429557 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 7 06:15:45.434077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:15:45.528685 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:15:45.589875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:15:45.645388 containerd[1926]: time="2025-07-07T06:15:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 06:15:45.647431 systemd-logind[1875]: New seat seat0. Jul 7 06:15:45.649445 systemd[1]: Started systemd-logind.service - User Login Management. 
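The coreos-metadata lines above follow the IMDSv2 pattern: first PUT a session-token request, then GET metadata paths with the token attached. A minimal, hypothetical sketch of that flow (the helper names are not from the agent, and the real requests only succeed from inside an EC2 instance):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl=21600):
    """Build the IMDSv2 PUT used to obtain a session token
    (the 'Putting .../latest/api/token' line above)."""
    req = urllib.request.Request(f"{IMDS}/latest/api/token", method="PUT")
    req.add_header("X-aws-ec2-metadata-token-ttl-seconds", str(ttl))
    return req

def metadata_request(path, token):
    """Build a metadata GET like the 'Fetching .../meta-data/...' lines."""
    req = urllib.request.Request(f"{IMDS}/2021-01-03/meta-data/{path}")
    req.add_header("X-aws-ec2-metadata-token", token)
    return req

# Example (only works on an EC2 instance, so left commented out):
# token = urllib.request.urlopen(token_request()).read().decode()
# host  = urllib.request.urlopen(metadata_request("hostname", token)).read().decode()
```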
Jul 7 06:15:45.663410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:15:45.664165 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:45.673269 containerd[1926]: time="2025-07-07T06:15:45.671322092Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:15:45.672410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:15:45.728217 systemd-logind[1875]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:15:45.732698 systemd-logind[1875]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:15:45.737328 systemd-logind[1875]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.742971400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.041µs"
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743013910Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743037693Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743208167Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743225524Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743269851Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743333882Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:15:45.743381 containerd[1926]: time="2025-07-07T06:15:45.743348805Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765415 containerd[1926]: time="2025-07-07T06:15:45.765359978Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765415 containerd[1926]: time="2025-07-07T06:15:45.765411384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765549 containerd[1926]: time="2025-07-07T06:15:45.765433442Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765549 containerd[1926]: time="2025-07-07T06:15:45.765445392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765641 containerd[1926]: time="2025-07-07T06:15:45.765602045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765883 containerd[1926]: time="2025-07-07T06:15:45.765853372Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765946 containerd[1926]: time="2025-07-07T06:15:45.765908287Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:15:45.765946 containerd[1926]: time="2025-07-07T06:15:45.765924792Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:15:45.768008 containerd[1926]: time="2025-07-07T06:15:45.766705278Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:15:45.769650 containerd[1926]: time="2025-07-07T06:15:45.769613578Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:15:45.769767 containerd[1926]: time="2025-07-07T06:15:45.769746645Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.780886929Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781027677Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781051187Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781084135Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781109747Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781125815Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781156811Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781172180Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781187774Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781203188Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781227773Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781257341Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781439577Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:15:45.783282 containerd[1926]: time="2025-07-07T06:15:45.781485001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781507510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781536228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781552782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781568269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781586144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781615872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781634387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781651898Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781668662Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781802604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781823449Z" level=info msg="Start snapshots syncer"
Jul 7 06:15:45.783827 containerd[1926]: time="2025-07-07T06:15:45.781979089Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:15:45.784268 containerd[1926]: time="2025-07-07T06:15:45.782350836Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:15:45.784268 containerd[1926]: time="2025-07-07T06:15:45.782422933Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:15:45.789774 containerd[1926]: time="2025-07-07T06:15:45.789689713Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:15:45.789940 containerd[1926]: time="2025-07-07T06:15:45.789911848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:15:45.789995 containerd[1926]: time="2025-07-07T06:15:45.789955373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:15:45.789995 containerd[1926]: time="2025-07-07T06:15:45.789974498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:15:45.789995 containerd[1926]: time="2025-07-07T06:15:45.789989968Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:15:45.790110 containerd[1926]: time="2025-07-07T06:15:45.790007229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:15:45.790110 containerd[1926]: time="2025-07-07T06:15:45.790022694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:15:45.790110 containerd[1926]: time="2025-07-07T06:15:45.790052220Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:15:45.790110 containerd[1926]: time="2025-07-07T06:15:45.790090675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:15:45.790110 containerd[1926]: time="2025-07-07T06:15:45.790107233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790124698Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790168610Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790190165Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790216100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790231288Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790261156Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790285561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:15:45.790315 containerd[1926]: time="2025-07-07T06:15:45.790301722Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:15:45.790578 containerd[1926]: time="2025-07-07T06:15:45.790325752Z" level=info msg="runtime interface created"
Jul 7 06:15:45.790578 containerd[1926]: time="2025-07-07T06:15:45.790334344Z" level=info msg="created NRI interface"
Jul 7 06:15:45.790578 containerd[1926]: time="2025-07-07T06:15:45.790351623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:15:45.790578 containerd[1926]: time="2025-07-07T06:15:45.790371606Z" level=info msg="Connect containerd service"
Jul 7 06:15:45.790578 containerd[1926]: time="2025-07-07T06:15:45.790414474Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 06:15:45.798265 containerd[1926]: time="2025-07-07T06:15:45.797784367Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:15:45.848680 ntpd[1869]: bind(24) AF_INET6 fe80::46b:36ff:fec3:c859%2#123 flags 0x11 failed: Cannot assign requested address
Jul 7 06:15:45.849166 ntpd[1869]: 7 Jul 06:15:45 ntpd[1869]: bind(24) AF_INET6 fe80::46b:36ff:fec3:c859%2#123 flags 0x11 failed: Cannot assign requested address
Jul 7 06:15:45.849166 ntpd[1869]: 7 Jul 06:15:45 ntpd[1869]: unable to create socket on eth0 (6) for fe80::46b:36ff:fec3:c859%2#123
Jul 7 06:15:45.849166 ntpd[1869]: 7 Jul 06:15:45 ntpd[1869]: failed to init interface for address fe80::46b:36ff:fec3:c859%2
Jul 7 06:15:45.848742 ntpd[1869]: unable to create socket on eth0 (6) for fe80::46b:36ff:fec3:c859%2#123
Jul 7 06:15:45.848759 ntpd[1869]: failed to init interface for address fe80::46b:36ff:fec3:c859%2
Jul 7 06:15:45.903984 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:15:46.052418 sshd_keygen[1917]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:15:46.127845 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:15:46.129880 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 7 06:15:46.134955 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:15:46.135899 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 7 06:15:46.142381 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1916 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 7 06:15:46.152679 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 7 06:15:46.209640 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:15:46.210124 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:15:46.218312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:15:46.313812 containerd[1926]: time="2025-07-07T06:15:46.313760068Z" level=info msg="Start subscribing containerd event"
Jul 7 06:15:46.315134 containerd[1926]: time="2025-07-07T06:15:46.314431650Z" level=info msg="Start recovering state"
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316306976Z" level=info msg="Start event monitor"
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316340474Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316351375Z" level=info msg="Start streaming server"
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316366990Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316379516Z" level=info msg="runtime interface starting up..."
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316389001Z" level=info msg="starting plugins..."
Jul 7 06:15:46.317038 containerd[1926]: time="2025-07-07T06:15:46.316403780Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:15:46.318793 containerd[1926]: time="2025-07-07T06:15:46.318669926Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:15:46.319056 containerd[1926]: time="2025-07-07T06:15:46.319016763Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:15:46.319998 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:15:46.320928 containerd[1926]: time="2025-07-07T06:15:46.320906036Z" level=info msg="containerd successfully booted in 0.676030s"
Jul 7 06:15:46.332909 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:15:46.337655 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:15:46.344704 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:15:46.346607 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:15:46.411658 systemd-networkd[1826]: eth0: Gained IPv6LL
Jul 7 06:15:46.415046 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:15:46.420333 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:15:46.424819 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 7 06:15:46.431808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:15:46.436713 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:15:46.556155 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:15:46.572613 amazon-ssm-agent[2194]: Initializing new seelog logger
Jul 7 06:15:46.573203 amazon-ssm-agent[2194]: New Seelog Logger Creation Complete
Jul 7 06:15:46.573316 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.573720 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.574051 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 processing appconfig overrides
Jul 7 06:15:46.574597 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.574684 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.574830 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 processing appconfig overrides
Jul 7 06:15:46.575197 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.575280 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.575420 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 processing appconfig overrides
Jul 7 06:15:46.575943 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.5744 INFO Proxy environment variables:
Jul 7 06:15:46.579462 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.579462 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.579462 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 processing appconfig overrides
Jul 7 06:15:46.590809 polkitd[2163]: Started polkitd version 126
Jul 7 06:15:46.602938 polkitd[2163]: Loading rules from directory /etc/polkit-1/rules.d
Jul 7 06:15:46.607259 polkitd[2163]: Loading rules from directory /run/polkit-1/rules.d
Jul 7 06:15:46.607329 polkitd[2163]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 7 06:15:46.609463 polkitd[2163]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Jul 7 06:15:46.609521 polkitd[2163]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Jul 7 06:15:46.609580 polkitd[2163]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 7 06:15:46.612507 polkitd[2163]: Finished loading, compiling and executing 2 rules
Jul 7 06:15:46.614703 systemd[1]: Started polkit.service - Authorization Manager.
Jul 7 06:15:46.617516 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 7 06:15:46.620658 polkitd[2163]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 7 06:15:46.627749 tar[1891]: linux-amd64/LICENSE
Jul 7 06:15:46.627749 tar[1891]: linux-amd64/README.md
Jul 7 06:15:46.650700 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:15:46.655918 systemd-hostnamed[1916]: Hostname set to (transient)
Jul 7 06:15:46.656316 systemd-resolved[1773]: System hostname changed to 'ip-172-31-23-116'.
Jul 7 06:15:46.677019 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.5745 INFO https_proxy:
Jul 7 06:15:46.775009 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.5745 INFO http_proxy:
Jul 7 06:15:46.786355 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.786355 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 7 06:15:46.786485 amazon-ssm-agent[2194]: 2025/07/07 06:15:46 processing appconfig overrides
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.5745 INFO no_proxy:
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.5748 INFO Checking if agent identity type OnPrem can be assumed
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.5750 INFO Checking if agent identity type EC2 can be assumed
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6684 INFO Agent will take identity from EC2
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6699 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6700 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6700 INFO [amazon-ssm-agent] Starting Core Agent
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6700 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6700 INFO [Registrar] Starting registrar module
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6711 INFO [EC2Identity] Checking disk for registration info
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6711 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.6711 INFO [EC2Identity] Generating registration keypair
Jul 7 06:15:46.815715 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.7518 INFO [EC2Identity] Checking write access before registering
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.7522 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.7861 INFO [EC2Identity] EC2 registration was successful.
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.7861 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.7862 INFO [CredentialRefresher] credentialRefresher has started
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.7862 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.8153 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 7 06:15:46.816175 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.8155 INFO [CredentialRefresher] Credentials ready
Jul 7 06:15:46.872778 amazon-ssm-agent[2194]: 2025-07-07 06:15:46.8158 INFO [CredentialRefresher] Next credential rotation will be in 29.999992739266666 minutes
Jul 7 06:15:47.834853 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:15:47.839290 amazon-ssm-agent[2194]: 2025-07-07 06:15:47.8338 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 7 06:15:47.839854 systemd[1]: Started sshd@0-172.31.23.116:22-139.178.89.65:45778.service - OpenSSH per-connection server daemon (139.178.89.65:45778).
Jul 7 06:15:47.939538 amazon-ssm-agent[2194]: 2025-07-07 06:15:47.8380 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2228) started
Jul 7 06:15:48.041009 amazon-ssm-agent[2194]: 2025-07-07 06:15:47.8381 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 7 06:15:48.087405 sshd[2229]: Accepted publickey for core from 139.178.89.65 port 45778 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:48.088565 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:48.095731 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:15:48.097791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:15:48.108409 systemd-logind[1875]: New session 1 of user core.
Jul 7 06:15:48.121427 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:15:48.124925 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:15:48.139141 (systemd)[2244]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:15:48.144283 systemd-logind[1875]: New session c1 of user core.
Jul 7 06:15:48.174417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:15:48.175781 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:15:48.184685 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:15:48.333566 systemd[2244]: Queued start job for default target default.target.
Jul 7 06:15:48.346378 systemd[2244]: Created slice app.slice - User Application Slice.
Jul 7 06:15:48.346410 systemd[2244]: Reached target paths.target - Paths.
Jul 7 06:15:48.346544 systemd[2244]: Reached target timers.target - Timers.
Jul 7 06:15:48.348343 systemd[2244]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:15:48.360660 systemd[2244]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:15:48.360784 systemd[2244]: Reached target sockets.target - Sockets.
Jul 7 06:15:48.360830 systemd[2244]: Reached target basic.target - Basic System.
Jul 7 06:15:48.360867 systemd[2244]: Reached target default.target - Main User Target.
Jul 7 06:15:48.360897 systemd[2244]: Startup finished in 206ms.
Jul 7 06:15:48.361178 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:15:48.366489 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:15:48.370599 systemd[1]: Startup finished in 2.729s (kernel) + 7.234s (initrd) + 7.546s (userspace) = 17.510s.
Jul 7 06:15:48.529190 systemd[1]: Started sshd@1-172.31.23.116:22-139.178.89.65:45782.service - OpenSSH per-connection server daemon (139.178.89.65:45782).
Jul 7 06:15:48.697859 sshd[2269]: Accepted publickey for core from 139.178.89.65 port 45782 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:48.700579 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:48.709315 systemd-logind[1875]: New session 2 of user core.
Jul 7 06:15:48.711451 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:15:48.835229 sshd[2271]: Connection closed by 139.178.89.65 port 45782
Jul 7 06:15:48.835760 sshd-session[2269]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:48.840543 systemd[1]: sshd@1-172.31.23.116:22-139.178.89.65:45782.service: Deactivated successfully.
Jul 7 06:15:48.842413 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 06:15:48.843785 systemd-logind[1875]: Session 2 logged out. Waiting for processes to exit.
Jul 7 06:15:48.845230 systemd-logind[1875]: Removed session 2.
Jul 7 06:15:48.848675 ntpd[1869]: Listen normally on 7 eth0 [fe80::46b:36ff:fec3:c859%2]:123
Jul 7 06:15:48.849014 ntpd[1869]: 7 Jul 06:15:48 ntpd[1869]: Listen normally on 7 eth0 [fe80::46b:36ff:fec3:c859%2]:123
Jul 7 06:15:48.866050 systemd[1]: Started sshd@2-172.31.23.116:22-139.178.89.65:45798.service - OpenSSH per-connection server daemon (139.178.89.65:45798).
Jul 7 06:15:49.037400 sshd[2278]: Accepted publickey for core from 139.178.89.65 port 45798 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:49.038453 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:49.044857 systemd-logind[1875]: New session 3 of user core.
Jul 7 06:15:49.048455 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:15:49.169694 sshd[2280]: Connection closed by 139.178.89.65 port 45798
Jul 7 06:15:49.170273 sshd-session[2278]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:49.173413 kubelet[2253]: E0707 06:15:49.173175 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:15:49.175710 systemd[1]: sshd@2-172.31.23.116:22-139.178.89.65:45798.service: Deactivated successfully.
Jul 7 06:15:49.177787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:15:49.177969 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:15:49.178507 systemd[1]: kubelet.service: Consumed 1.051s CPU time, 264.1M memory peak.
Jul 7 06:15:49.179198 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 06:15:49.180111 systemd-logind[1875]: Session 3 logged out. Waiting for processes to exit.
Jul 7 06:15:49.182731 systemd-logind[1875]: Removed session 3.
Jul 7 06:15:49.208166 systemd[1]: Started sshd@3-172.31.23.116:22-139.178.89.65:45806.service - OpenSSH per-connection server daemon (139.178.89.65:45806).
Jul 7 06:15:49.379398 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 45806 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:49.380387 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:49.385826 systemd-logind[1875]: New session 4 of user core.
Jul 7 06:15:49.388461 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:15:49.513781 sshd[2289]: Connection closed by 139.178.89.65 port 45806
Jul 7 06:15:49.514315 sshd-session[2287]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:49.517766 systemd[1]: sshd@3-172.31.23.116:22-139.178.89.65:45806.service: Deactivated successfully.
Jul 7 06:15:49.519612 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:15:49.520411 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:15:49.521975 systemd-logind[1875]: Removed session 4.
Jul 7 06:15:49.547209 systemd[1]: Started sshd@4-172.31.23.116:22-139.178.89.65:45114.service - OpenSSH per-connection server daemon (139.178.89.65:45114).
Jul 7 06:15:49.716149 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 45114 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:49.717640 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:49.722304 systemd-logind[1875]: New session 5 of user core.
Jul 7 06:15:49.732489 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:15:49.849796 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:15:49.850403 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:15:49.860838 sudo[2298]: pam_unix(sudo:session): session closed for user root
Jul 7 06:15:49.883867 sshd[2297]: Connection closed by 139.178.89.65 port 45114
Jul 7 06:15:49.884621 sshd-session[2295]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:49.888624 systemd[1]: sshd@4-172.31.23.116:22-139.178.89.65:45114.service: Deactivated successfully.
Jul 7 06:15:49.890367 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:15:49.892180 systemd-logind[1875]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:15:49.893602 systemd-logind[1875]: Removed session 5.
Jul 7 06:15:49.919100 systemd[1]: Started sshd@5-172.31.23.116:22-139.178.89.65:45120.service - OpenSSH per-connection server daemon (139.178.89.65:45120).
Jul 7 06:15:50.090192 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 45120 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:50.091595 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:50.097178 systemd-logind[1875]: New session 6 of user core.
Jul 7 06:15:50.104451 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:15:50.203063 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:15:50.203348 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:15:50.209163 sudo[2308]: pam_unix(sudo:session): session closed for user root
Jul 7 06:15:50.214841 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:15:50.215121 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:15:50.225541 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:15:50.263464 augenrules[2330]: No rules
Jul 7 06:15:50.264160 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:15:50.264381 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:15:50.265652 sudo[2307]: pam_unix(sudo:session): session closed for user root
Jul 7 06:15:50.288443 sshd[2306]: Connection closed by 139.178.89.65 port 45120
Jul 7 06:15:50.289220 sshd-session[2304]: pam_unix(sshd:session): session closed for user core
Jul 7 06:15:50.293320 systemd[1]: sshd@5-172.31.23.116:22-139.178.89.65:45120.service: Deactivated successfully.
Jul 7 06:15:50.295789 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:15:50.298309 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:15:50.299743 systemd-logind[1875]: Removed session 6.
Jul 7 06:15:50.324107 systemd[1]: Started sshd@6-172.31.23.116:22-139.178.89.65:45126.service - OpenSSH per-connection server daemon (139.178.89.65:45126).
Jul 7 06:15:50.501683 sshd[2339]: Accepted publickey for core from 139.178.89.65 port 45126 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:15:50.503550 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:15:50.508150 systemd-logind[1875]: New session 7 of user core.
Jul 7 06:15:50.520472 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:15:50.617093 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:15:50.617388 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:15:51.227070 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:15:51.248748 (dockerd)[2360]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:15:51.692575 dockerd[2360]: time="2025-07-07T06:15:51.692436574Z" level=info msg="Starting up"
Jul 7 06:15:51.693417 dockerd[2360]: time="2025-07-07T06:15:51.693386659Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:15:52.095844 systemd-resolved[1773]: Clock change detected. Flushing caches.
Jul 7 06:15:52.155214 dockerd[2360]: time="2025-07-07T06:15:52.155134654Z" level=info msg="Loading containers: start."
Jul 7 06:15:52.166760 kernel: Initializing XFRM netlink socket
Jul 7 06:15:52.397468 (udev-worker)[2381]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 06:15:52.441638 systemd-networkd[1826]: docker0: Link UP
Jul 7 06:15:52.447645 dockerd[2360]: time="2025-07-07T06:15:52.447589484Z" level=info msg="Loading containers: done."
Jul 7 06:15:52.465893 dockerd[2360]: time="2025-07-07T06:15:52.465838243Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:15:52.466068 dockerd[2360]: time="2025-07-07T06:15:52.465938203Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:15:52.466119 dockerd[2360]: time="2025-07-07T06:15:52.466070765Z" level=info msg="Initializing buildkit"
Jul 7 06:15:52.490900 dockerd[2360]: time="2025-07-07T06:15:52.490855555Z" level=info msg="Completed buildkit initialization"
Jul 7 06:15:52.498551 dockerd[2360]: time="2025-07-07T06:15:52.498500996Z" level=info msg="Daemon has completed initialization"
Jul 7 06:15:52.499106 dockerd[2360]: time="2025-07-07T06:15:52.498708006Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:15:52.498742 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:15:53.614898 containerd[1926]: time="2025-07-07T06:15:53.614863411Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 06:15:54.186707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247992254.mount: Deactivated successfully.
Jul 7 06:15:55.500388 containerd[1926]: time="2025-07-07T06:15:55.500321079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:55.501806 containerd[1926]: time="2025-07-07T06:15:55.501624882Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 7 06:15:55.502768 containerd[1926]: time="2025-07-07T06:15:55.502734281Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:55.506246 containerd[1926]: time="2025-07-07T06:15:55.505947082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:55.507608 containerd[1926]: time="2025-07-07T06:15:55.507446982Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.892548001s"
Jul 7 06:15:55.507608 containerd[1926]: time="2025-07-07T06:15:55.507489223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 7 06:15:55.508255 containerd[1926]: time="2025-07-07T06:15:55.508204656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 06:15:56.905177 containerd[1926]: time="2025-07-07T06:15:56.905130964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:56.907061 containerd[1926]: time="2025-07-07T06:15:56.906994193Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 7 06:15:56.909724 containerd[1926]: time="2025-07-07T06:15:56.909325775Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:56.913601 containerd[1926]: time="2025-07-07T06:15:56.913565481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:56.914289 containerd[1926]: time="2025-07-07T06:15:56.914257627Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.406016223s"
Jul 7 06:15:56.914381 containerd[1926]: time="2025-07-07T06:15:56.914369026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 7 06:15:56.914992 containerd[1926]: time="2025-07-07T06:15:56.914973469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 06:15:58.058744 containerd[1926]: time="2025-07-07T06:15:58.058690336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:58.059943 containerd[1926]: time="2025-07-07T06:15:58.059834370Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 7 06:15:58.061087 containerd[1926]: time="2025-07-07T06:15:58.060971610Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:58.064259 containerd[1926]: time="2025-07-07T06:15:58.063946502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:58.064801 containerd[1926]: time="2025-07-07T06:15:58.064770422Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.149749986s"
Jul 7 06:15:58.064867 containerd[1926]: time="2025-07-07T06:15:58.064806469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 7 06:15:58.065387 containerd[1926]: time="2025-07-07T06:15:58.065362697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 06:15:59.129141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671811733.mount: Deactivated successfully.
Jul 7 06:15:59.654291 containerd[1926]: time="2025-07-07T06:15:59.654225335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:59.655491 containerd[1926]: time="2025-07-07T06:15:59.655344624Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 7 06:15:59.657282 containerd[1926]: time="2025-07-07T06:15:59.657225020Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:59.659788 containerd[1926]: time="2025-07-07T06:15:59.659717774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:15:59.660722 containerd[1926]: time="2025-07-07T06:15:59.660691959Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.595294001s"
Jul 7 06:15:59.660817 containerd[1926]: time="2025-07-07T06:15:59.660805061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 7 06:15:59.663766 containerd[1926]: time="2025-07-07T06:15:59.663723493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 06:15:59.675201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:15:59.677160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:15:59.934192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:15:59.945186 (kubelet)[2640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:16:00.025475 kubelet[2640]: E0707 06:16:00.025392 2640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:16:00.030147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:16:00.030378 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:16:00.031051 systemd[1]: kubelet.service: Consumed 184ms CPU time, 108.7M memory peak.
Jul 7 06:16:00.208411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334538063.mount: Deactivated successfully.
Jul 7 06:16:01.215901 containerd[1926]: time="2025-07-07T06:16:01.215840223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:01.218436 containerd[1926]: time="2025-07-07T06:16:01.218371148Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 7 06:16:01.221686 containerd[1926]: time="2025-07-07T06:16:01.219898221Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:01.224212 containerd[1926]: time="2025-07-07T06:16:01.224141880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:01.226310 containerd[1926]: time="2025-07-07T06:16:01.225542200Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.561761322s"
Jul 7 06:16:01.226310 containerd[1926]: time="2025-07-07T06:16:01.225589821Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 06:16:01.226310 containerd[1926]: time="2025-07-07T06:16:01.226101426Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 06:16:01.891025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958646139.mount: Deactivated successfully.
Jul 7 06:16:01.900569 containerd[1926]: time="2025-07-07T06:16:01.899189296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:16:01.900569 containerd[1926]: time="2025-07-07T06:16:01.900528542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 7 06:16:01.902989 containerd[1926]: time="2025-07-07T06:16:01.902927891Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:16:01.907810 containerd[1926]: time="2025-07-07T06:16:01.907721965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:16:01.911132 containerd[1926]: time="2025-07-07T06:16:01.910706594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 684.570739ms"
Jul 7 06:16:01.911861 containerd[1926]: time="2025-07-07T06:16:01.911322047Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 06:16:01.912630 containerd[1926]: time="2025-07-07T06:16:01.912594769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 06:16:02.545437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325082176.mount: Deactivated successfully.
Jul 7 06:16:05.989959 containerd[1926]: time="2025-07-07T06:16:05.989888895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:05.991248 containerd[1926]: time="2025-07-07T06:16:05.991210082Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 7 06:16:05.992698 containerd[1926]: time="2025-07-07T06:16:05.992108750Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:05.995046 containerd[1926]: time="2025-07-07T06:16:05.994982761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:05.996550 containerd[1926]: time="2025-07-07T06:16:05.996163082Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.083530774s"
Jul 7 06:16:05.996550 containerd[1926]: time="2025-07-07T06:16:05.996209012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 7 06:16:08.608067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:08.608833 systemd[1]: kubelet.service: Consumed 184ms CPU time, 108.7M memory peak.
Jul 7 06:16:08.611507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:08.650375 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-7.scope)...
Jul 7 06:16:08.650394 systemd[1]: Reloading...
Jul 7 06:16:08.801790 zram_generator::config[2832]: No configuration found.
Jul 7 06:16:08.940513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:16:09.081247 systemd[1]: Reloading finished in 430 ms.
Jul 7 06:16:09.148286 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 06:16:09.148399 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 06:16:09.148718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:09.148789 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.3M memory peak.
Jul 7 06:16:09.151488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:16:09.380799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:16:09.392245 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:16:09.464280 kubelet[2892]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:16:09.464280 kubelet[2892]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:16:09.464280 kubelet[2892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:16:09.466577 kubelet[2892]: I0707 06:16:09.466522 2892 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:16:09.837722 kubelet[2892]: I0707 06:16:09.837127 2892 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 06:16:09.837722 kubelet[2892]: I0707 06:16:09.837167 2892 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:16:09.837722 kubelet[2892]: I0707 06:16:09.837536 2892 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 06:16:09.893336 kubelet[2892]: I0707 06:16:09.892788 2892 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:16:09.893336 kubelet[2892]: E0707 06:16:09.892839 2892 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:16:09.906766 kubelet[2892]: I0707 06:16:09.906741 2892 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:16:09.913980 kubelet[2892]: I0707 06:16:09.913949 2892 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:16:09.915900 kubelet[2892]: I0707 06:16:09.915857 2892 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 06:16:09.916091 kubelet[2892]: I0707 06:16:09.916035 2892 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:16:09.916339 kubelet[2892]: I0707 06:16:09.916072 2892 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:16:09.916339 kubelet[2892]: I0707 06:16:09.916251 2892 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:16:09.916339 kubelet[2892]: I0707 06:16:09.916259 2892 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 06:16:09.916525 kubelet[2892]: I0707 06:16:09.916356 2892 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:16:09.922945 kubelet[2892]: I0707 06:16:09.922891 2892 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 06:16:09.922945 kubelet[2892]: I0707 06:16:09.922945 2892 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:16:09.925386 kubelet[2892]: I0707 06:16:09.925107 2892 kubelet.go:314] "Adding apiserver pod source"
Jul 7 06:16:09.925386 kubelet[2892]: I0707 06:16:09.925135 2892 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:16:09.931375 kubelet[2892]: W0707 06:16:09.930228 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-116&limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused
Jul 7 06:16:09.931375 kubelet[2892]: E0707 06:16:09.931153 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-116&limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:16:09.931375 kubelet[2892]: I0707 06:16:09.931250 2892 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:16:09.935627 kubelet[2892]: I0707 06:16:09.935507 2892 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 06:16:09.936402 kubelet[2892]: W0707 06:16:09.936376 2892 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 06:16:09.937771 kubelet[2892]: W0707 06:16:09.937488 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused
Jul 7 06:16:09.937771 kubelet[2892]: I0707 06:16:09.937529 2892 server.go:1274] "Started kubelet"
Jul 7 06:16:09.937771 kubelet[2892]: E0707 06:16:09.937540 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:16:09.937771 kubelet[2892]: I0707 06:16:09.937630 2892 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:16:09.942562 kubelet[2892]: I0707 06:16:09.942495 2892 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:16:09.942933 kubelet[2892]: I0707 06:16:09.942889 2892 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:16:09.949574 kubelet[2892]: E0707 06:16:09.943716 2892 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.116:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.116:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-116.184fe38bde279131 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-116,UID:ip-172-31-23-116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-116,},FirstTimestamp:2025-07-07 06:16:09.937506609 +0000 UTC m=+0.540378230,LastTimestamp:2025-07-07 06:16:09.937506609 +0000 UTC m=+0.540378230,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-116,}"
Jul 7 06:16:09.949574 kubelet[2892]: I0707 06:16:09.949401 2892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:16:09.950684 kubelet[2892]: I0707 06:16:09.949999 2892 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 06:16:09.953137 kubelet[2892]: I0707 06:16:09.953108 2892 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:16:09.958870 kubelet[2892]: E0707 06:16:09.958839 2892 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-116\" not found"
Jul 7 06:16:09.959020 kubelet[2892]: I0707 06:16:09.959012 2892 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 06:16:09.959274 kubelet[2892]: I0707 06:16:09.959262 2892 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 06:16:09.959387 kubelet[2892]: I0707 06:16:09.959380 2892 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:16:09.961030 kubelet[2892]: W0707 06:16:09.960973 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused
Jul 7 06:16:09.961160 kubelet[2892]: E0707 06:16:09.961146 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:16:09.961412 kubelet[2892]: E0707 06:16:09.961388 2892 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:16:09.961768 kubelet[2892]: E0707 06:16:09.961745 2892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-116?timeout=10s\": dial tcp 172.31.23.116:6443: connect: connection refused" interval="200ms"
Jul 7 06:16:09.967148 kubelet[2892]: I0707 06:16:09.967011 2892 factory.go:221] Registration of the containerd container factory successfully
Jul 7 06:16:09.967148 kubelet[2892]: I0707 06:16:09.967029 2892 factory.go:221] Registration of the systemd container factory successfully
Jul 7 06:16:09.967148 kubelet[2892]: I0707 06:16:09.967107 2892 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:16:09.992209 kubelet[2892]: I0707 06:16:09.991423 2892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:16:09.997934 kubelet[2892]: I0707 06:16:09.997887 2892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:16:09.997934 kubelet[2892]: I0707 06:16:09.997924 2892 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 06:16:09.998118 kubelet[2892]: I0707 06:16:09.997948 2892 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 06:16:09.998118 kubelet[2892]: E0707 06:16:09.998007 2892 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:16:10.002204 kubelet[2892]: I0707 06:16:10.002163 2892 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 06:16:10.002204 kubelet[2892]: I0707 06:16:10.002183 2892 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 06:16:10.002204 kubelet[2892]: I0707 06:16:10.002205 2892 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:16:10.004349 kubelet[2892]: W0707 06:16:10.004282 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused
Jul 7 06:16:10.004557 kubelet[2892]: E0707 06:16:10.004479 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:16:10.007866 kubelet[2892]: I0707 06:16:10.007686 2892 policy_none.go:49] "None policy: Start"
Jul 7 06:16:10.008683 kubelet[2892]: I0707 06:16:10.008638 2892 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 06:16:10.008808 kubelet[2892]: I0707 06:16:10.008697 2892 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:16:10.025721 systemd[1]: Created slice kubepods.slice - libcontainer
container kubepods.slice. Jul 7 06:16:10.038269 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:16:10.043080 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:16:10.053689 kubelet[2892]: I0707 06:16:10.053500 2892 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:16:10.054262 kubelet[2892]: I0707 06:16:10.053803 2892 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:16:10.054262 kubelet[2892]: I0707 06:16:10.053818 2892 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:16:10.054262 kubelet[2892]: I0707 06:16:10.054111 2892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:16:10.057772 kubelet[2892]: E0707 06:16:10.057738 2892 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-116\" not found" Jul 7 06:16:10.112098 systemd[1]: Created slice kubepods-burstable-pod4a60199cb9cb05adf0423781a0db0a69.slice - libcontainer container kubepods-burstable-pod4a60199cb9cb05adf0423781a0db0a69.slice. Jul 7 06:16:10.136447 systemd[1]: Created slice kubepods-burstable-pod67b6a9e5e33f53bd06dae326ce7be9d4.slice - libcontainer container kubepods-burstable-pod67b6a9e5e33f53bd06dae326ce7be9d4.slice. Jul 7 06:16:10.141460 systemd[1]: Created slice kubepods-burstable-poda800753c77f252e1c96a6ead39803caf.slice - libcontainer container kubepods-burstable-poda800753c77f252e1c96a6ead39803caf.slice. 
Jul 7 06:16:10.156155 kubelet[2892]: I0707 06:16:10.155793 2892 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-116" Jul 7 06:16:10.156358 kubelet[2892]: E0707 06:16:10.156325 2892 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.116:6443/api/v1/nodes\": dial tcp 172.31.23.116:6443: connect: connection refused" node="ip-172-31-23-116" Jul 7 06:16:10.163180 kubelet[2892]: E0707 06:16:10.163122 2892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-116?timeout=10s\": dial tcp 172.31.23.116:6443: connect: connection refused" interval="400ms" Jul 7 06:16:10.261820 kubelet[2892]: I0707 06:16:10.261772 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:10.261820 kubelet[2892]: I0707 06:16:10.261816 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a60199cb9cb05adf0423781a0db0a69-ca-certs\") pod \"kube-apiserver-ip-172-31-23-116\" (UID: \"4a60199cb9cb05adf0423781a0db0a69\") " pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:10.261820 kubelet[2892]: I0707 06:16:10.261838 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 
7 06:16:10.262039 kubelet[2892]: I0707 06:16:10.261854 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:10.262039 kubelet[2892]: I0707 06:16:10.261870 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:10.262039 kubelet[2892]: I0707 06:16:10.261890 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a60199cb9cb05adf0423781a0db0a69-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-116\" (UID: \"4a60199cb9cb05adf0423781a0db0a69\") " pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:10.262039 kubelet[2892]: I0707 06:16:10.261908 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a60199cb9cb05adf0423781a0db0a69-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-116\" (UID: \"4a60199cb9cb05adf0423781a0db0a69\") " pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:10.262039 kubelet[2892]: I0707 06:16:10.261926 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: 
\"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:10.262166 kubelet[2892]: I0707 06:16:10.261942 2892 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a800753c77f252e1c96a6ead39803caf-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-116\" (UID: \"a800753c77f252e1c96a6ead39803caf\") " pod="kube-system/kube-scheduler-ip-172-31-23-116" Jul 7 06:16:10.358668 kubelet[2892]: I0707 06:16:10.358523 2892 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-116" Jul 7 06:16:10.358930 kubelet[2892]: E0707 06:16:10.358903 2892 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.116:6443/api/v1/nodes\": dial tcp 172.31.23.116:6443: connect: connection refused" node="ip-172-31-23-116" Jul 7 06:16:10.435790 containerd[1926]: time="2025-07-07T06:16:10.435732458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-116,Uid:4a60199cb9cb05adf0423781a0db0a69,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:10.440040 containerd[1926]: time="2025-07-07T06:16:10.439872872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-116,Uid:67b6a9e5e33f53bd06dae326ce7be9d4,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:10.445055 containerd[1926]: time="2025-07-07T06:16:10.444868103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-116,Uid:a800753c77f252e1c96a6ead39803caf,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:10.564363 kubelet[2892]: E0707 06:16:10.564293 2892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-116?timeout=10s\": dial tcp 172.31.23.116:6443: connect: connection refused" interval="800ms" Jul 7 06:16:10.597325 
containerd[1926]: time="2025-07-07T06:16:10.597267544Z" level=info msg="connecting to shim b36399912801631363094753b4c1af163945e642372137e4683681914de944f5" address="unix:///run/containerd/s/0d070009a2f79052aca058b8e950f3b5daeaf10a7a90b2d49c243eaf6837d478" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:10.599008 containerd[1926]: time="2025-07-07T06:16:10.598890026Z" level=info msg="connecting to shim f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b" address="unix:///run/containerd/s/d69bffc31af5728bc4b8ff3c6589a6dc4ffbfe1d18784936b70a8f8a51d65039" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:10.608948 containerd[1926]: time="2025-07-07T06:16:10.608885717Z" level=info msg="connecting to shim 2cf2df62e127cb22af87766399334862c1833780c688521671f4e06d001c9ae4" address="unix:///run/containerd/s/ac389a905c0fc2a58d24a4897a1fcdc1c903f0838884eee7ef48755168c626c6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:10.715354 systemd[1]: Started cri-containerd-2cf2df62e127cb22af87766399334862c1833780c688521671f4e06d001c9ae4.scope - libcontainer container 2cf2df62e127cb22af87766399334862c1833780c688521671f4e06d001c9ae4. Jul 7 06:16:10.721779 systemd[1]: Started cri-containerd-b36399912801631363094753b4c1af163945e642372137e4683681914de944f5.scope - libcontainer container b36399912801631363094753b4c1af163945e642372137e4683681914de944f5. Jul 7 06:16:10.723634 systemd[1]: Started cri-containerd-f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b.scope - libcontainer container f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b. 
Jul 7 06:16:10.763071 kubelet[2892]: I0707 06:16:10.763043 2892 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-116" Jul 7 06:16:10.763472 kubelet[2892]: E0707 06:16:10.763450 2892 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.116:6443/api/v1/nodes\": dial tcp 172.31.23.116:6443: connect: connection refused" node="ip-172-31-23-116" Jul 7 06:16:10.800701 containerd[1926]: time="2025-07-07T06:16:10.800474008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-116,Uid:4a60199cb9cb05adf0423781a0db0a69,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf2df62e127cb22af87766399334862c1833780c688521671f4e06d001c9ae4\"" Jul 7 06:16:10.811954 containerd[1926]: time="2025-07-07T06:16:10.811799835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-116,Uid:67b6a9e5e33f53bd06dae326ce7be9d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b36399912801631363094753b4c1af163945e642372137e4683681914de944f5\"" Jul 7 06:16:10.815811 containerd[1926]: time="2025-07-07T06:16:10.815758415Z" level=info msg="CreateContainer within sandbox \"2cf2df62e127cb22af87766399334862c1833780c688521671f4e06d001c9ae4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:16:10.823067 containerd[1926]: time="2025-07-07T06:16:10.823006822Z" level=info msg="CreateContainer within sandbox \"b36399912801631363094753b4c1af163945e642372137e4683681914de944f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:16:10.838708 containerd[1926]: time="2025-07-07T06:16:10.838671019Z" level=info msg="Container 2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:10.839775 containerd[1926]: time="2025-07-07T06:16:10.839740863Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-116,Uid:a800753c77f252e1c96a6ead39803caf,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b\"" Jul 7 06:16:10.843076 containerd[1926]: time="2025-07-07T06:16:10.843040490Z" level=info msg="CreateContainer within sandbox \"f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:16:10.846286 containerd[1926]: time="2025-07-07T06:16:10.845630371Z" level=info msg="Container aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:10.854148 containerd[1926]: time="2025-07-07T06:16:10.854110803Z" level=info msg="CreateContainer within sandbox \"2cf2df62e127cb22af87766399334862c1833780c688521671f4e06d001c9ae4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68\"" Jul 7 06:16:10.855207 containerd[1926]: time="2025-07-07T06:16:10.855174283Z" level=info msg="StartContainer for \"2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68\"" Jul 7 06:16:10.858313 containerd[1926]: time="2025-07-07T06:16:10.858256309Z" level=info msg="connecting to shim 2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68" address="unix:///run/containerd/s/ac389a905c0fc2a58d24a4897a1fcdc1c903f0838884eee7ef48755168c626c6" protocol=ttrpc version=3 Jul 7 06:16:10.859466 kubelet[2892]: W0707 06:16:10.859379 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused Jul 7 06:16:10.860061 kubelet[2892]: E0707 06:16:10.859465 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:16:10.863081 containerd[1926]: time="2025-07-07T06:16:10.863038954Z" level=info msg="CreateContainer within sandbox \"b36399912801631363094753b4c1af163945e642372137e4683681914de944f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\"" Jul 7 06:16:10.863960 containerd[1926]: time="2025-07-07T06:16:10.863930931Z" level=info msg="StartContainer for \"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\"" Jul 7 06:16:10.865212 containerd[1926]: time="2025-07-07T06:16:10.865184061Z" level=info msg="connecting to shim aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6" address="unix:///run/containerd/s/0d070009a2f79052aca058b8e950f3b5daeaf10a7a90b2d49c243eaf6837d478" protocol=ttrpc version=3 Jul 7 06:16:10.870278 containerd[1926]: time="2025-07-07T06:16:10.870241600Z" level=info msg="Container 0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:10.887118 containerd[1926]: time="2025-07-07T06:16:10.887070371Z" level=info msg="CreateContainer within sandbox \"f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\"" Jul 7 06:16:10.887803 containerd[1926]: time="2025-07-07T06:16:10.887771639Z" level=info msg="StartContainer for \"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\"" Jul 7 06:16:10.888005 systemd[1]: Started cri-containerd-2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68.scope - libcontainer container 
2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68. Jul 7 06:16:10.889535 containerd[1926]: time="2025-07-07T06:16:10.889256708Z" level=info msg="connecting to shim 0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8" address="unix:///run/containerd/s/d69bffc31af5728bc4b8ff3c6589a6dc4ffbfe1d18784936b70a8f8a51d65039" protocol=ttrpc version=3 Jul 7 06:16:10.903259 systemd[1]: Started cri-containerd-aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6.scope - libcontainer container aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6. Jul 7 06:16:10.927614 systemd[1]: Started cri-containerd-0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8.scope - libcontainer container 0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8. Jul 7 06:16:10.956880 kubelet[2892]: W0707 06:16:10.956838 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused Jul 7 06:16:10.957027 kubelet[2892]: E0707 06:16:10.956888 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:16:10.998809 containerd[1926]: time="2025-07-07T06:16:10.998067107Z" level=info msg="StartContainer for \"2f419cb6cbf22b2fe29bde89c6d8a86d0c81f8c23208871ff878128738f56d68\" returns successfully" Jul 7 06:16:10.998809 containerd[1926]: time="2025-07-07T06:16:10.998215120Z" level=info msg="StartContainer for \"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\" returns successfully" Jul 7 06:16:11.074827 containerd[1926]: 
time="2025-07-07T06:16:11.074760784Z" level=info msg="StartContainer for \"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\" returns successfully" Jul 7 06:16:11.272782 kubelet[2892]: W0707 06:16:11.272600 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused Jul 7 06:16:11.272782 kubelet[2892]: E0707 06:16:11.272721 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:16:11.327554 kubelet[2892]: W0707 06:16:11.327466 2892 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-116&limit=500&resourceVersion=0": dial tcp 172.31.23.116:6443: connect: connection refused Jul 7 06:16:11.327721 kubelet[2892]: E0707 06:16:11.327567 2892 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-116&limit=500&resourceVersion=0\": dial tcp 172.31.23.116:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:16:11.365670 kubelet[2892]: E0707 06:16:11.365402 2892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-116?timeout=10s\": dial tcp 172.31.23.116:6443: connect: connection refused" interval="1.6s" Jul 7 06:16:11.565625 
kubelet[2892]: I0707 06:16:11.565478 2892 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-116" Jul 7 06:16:11.568131 kubelet[2892]: E0707 06:16:11.568087 2892 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.116:6443/api/v1/nodes\": dial tcp 172.31.23.116:6443: connect: connection refused" node="ip-172-31-23-116" Jul 7 06:16:13.171260 kubelet[2892]: I0707 06:16:13.171235 2892 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-116" Jul 7 06:16:14.141674 kubelet[2892]: I0707 06:16:14.141616 2892 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-116" Jul 7 06:16:14.142416 kubelet[2892]: E0707 06:16:14.141864 2892 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-116\": node \"ip-172-31-23-116\" not found" Jul 7 06:16:14.172246 kubelet[2892]: E0707 06:16:14.172201 2892 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-116\" not found" Jul 7 06:16:14.273168 kubelet[2892]: E0707 06:16:14.273123 2892 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-116\" not found" Jul 7 06:16:14.374287 kubelet[2892]: E0707 06:16:14.374240 2892 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-116\" not found" Jul 7 06:16:14.940193 kubelet[2892]: I0707 06:16:14.940148 2892 apiserver.go:52] "Watching apiserver" Jul 7 06:16:14.960163 kubelet[2892]: I0707 06:16:14.960105 2892 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:16:16.362361 systemd[1]: Reload requested from client PID 3162 ('systemctl') (unit session-7.scope)... Jul 7 06:16:16.362379 systemd[1]: Reloading... Jul 7 06:16:16.502702 zram_generator::config[3210]: No configuration found. 
Jul 7 06:16:16.612286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:16:16.772514 systemd[1]: Reloading finished in 409 ms. Jul 7 06:16:16.797803 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:16:16.805001 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:16:16.805272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:16:16.805333 systemd[1]: kubelet.service: Consumed 885ms CPU time, 126.3M memory peak. Jul 7 06:16:16.808070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:16:16.923779 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 06:16:17.149515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:16:17.164192 (kubelet)[3270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:16:17.234606 kubelet[3270]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:16:17.236671 kubelet[3270]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:16:17.236671 kubelet[3270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:16:17.236671 kubelet[3270]: I0707 06:16:17.235185 3270 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:16:17.247342 kubelet[3270]: I0707 06:16:17.247307 3270 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:16:17.247508 kubelet[3270]: I0707 06:16:17.247496 3270 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:16:17.247951 kubelet[3270]: I0707 06:16:17.247933 3270 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:16:17.250392 kubelet[3270]: I0707 06:16:17.250358 3270 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:16:17.267103 kubelet[3270]: I0707 06:16:17.267069 3270 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:16:17.272297 kubelet[3270]: I0707 06:16:17.272276 3270 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:16:17.275928 kubelet[3270]: I0707 06:16:17.275907 3270 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:16:17.276181 kubelet[3270]: I0707 06:16:17.276170 3270 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:16:17.276400 kubelet[3270]: I0707 06:16:17.276373 3270 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:16:17.276682 kubelet[3270]: I0707 06:16:17.276487 3270 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Jul 7 06:16:17.276828 kubelet[3270]: I0707 06:16:17.276818 3270 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:16:17.276885 kubelet[3270]: I0707 06:16:17.276877 3270 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:16:17.276980 kubelet[3270]: I0707 06:16:17.276973 3270 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:16:17.277132 kubelet[3270]: I0707 06:16:17.277125 3270 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:16:17.277807 kubelet[3270]: I0707 06:16:17.277792 3270 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:16:17.278129 kubelet[3270]: I0707 06:16:17.278117 3270 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:16:17.278233 kubelet[3270]: I0707 06:16:17.278224 3270 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:16:17.280925 kubelet[3270]: I0707 06:16:17.280868 3270 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:16:17.282449 kubelet[3270]: I0707 06:16:17.281898 3270 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:16:17.283795 kubelet[3270]: I0707 06:16:17.283769 3270 server.go:1274] "Started kubelet" Jul 7 06:16:17.294717 kubelet[3270]: I0707 06:16:17.293830 3270 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:16:17.295743 kubelet[3270]: I0707 06:16:17.295724 3270 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:16:17.302386 kubelet[3270]: I0707 06:16:17.302335 3270 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:16:17.302776 kubelet[3270]: I0707 06:16:17.302761 3270 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:16:17.306417 
kubelet[3270]: I0707 06:16:17.306387 3270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:16:17.306751 kubelet[3270]: I0707 06:16:17.306733 3270 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:16:17.320677 kubelet[3270]: I0707 06:16:17.320535 3270 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:16:17.321063 kubelet[3270]: E0707 06:16:17.320983 3270 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-116\" not found" Jul 7 06:16:17.322815 kubelet[3270]: I0707 06:16:17.322773 3270 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:16:17.323837 kubelet[3270]: I0707 06:16:17.323821 3270 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:16:17.325923 kubelet[3270]: E0707 06:16:17.324794 3270 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:16:17.340218 kubelet[3270]: I0707 06:16:17.340120 3270 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:16:17.340218 kubelet[3270]: I0707 06:16:17.340146 3270 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:16:17.340446 kubelet[3270]: I0707 06:16:17.340253 3270 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:16:17.356456 kubelet[3270]: I0707 06:16:17.356253 3270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:16:17.360714 kubelet[3270]: I0707 06:16:17.359046 3270 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:16:17.360714 kubelet[3270]: I0707 06:16:17.359109 3270 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:16:17.360714 kubelet[3270]: I0707 06:16:17.359133 3270 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:16:17.360714 kubelet[3270]: E0707 06:16:17.359187 3270 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:16:17.400236 sudo[3301]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 06:16:17.401609 sudo[3301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 06:16:17.434698 kubelet[3270]: I0707 06:16:17.434672 3270 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:16:17.434860 kubelet[3270]: I0707 06:16:17.434849 3270 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:16:17.434934 kubelet[3270]: I0707 06:16:17.434925 3270 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:16:17.435551 kubelet[3270]: I0707 06:16:17.435179 3270 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:16:17.435745 kubelet[3270]: I0707 06:16:17.435677 3270 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:16:17.435932 kubelet[3270]: I0707 06:16:17.435923 3270 policy_none.go:49] "None policy: Start" Jul 7 06:16:17.437507 kubelet[3270]: I0707 06:16:17.437470 3270 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:16:17.437722 kubelet[3270]: I0707 06:16:17.437709 3270 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:16:17.438067 kubelet[3270]: I0707 06:16:17.438057 3270 state_mem.go:75] "Updated machine memory state" Jul 7 06:16:17.445484 kubelet[3270]: I0707 06:16:17.445447 3270 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 
06:16:17.448397 kubelet[3270]: I0707 06:16:17.447560 3270 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:16:17.450279 kubelet[3270]: I0707 06:16:17.448959 3270 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:16:17.452688 kubelet[3270]: I0707 06:16:17.452601 3270 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:16:17.489896 kubelet[3270]: E0707 06:16:17.488312 3270 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-116\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:17.525159 kubelet[3270]: I0707 06:16:17.525120 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a60199cb9cb05adf0423781a0db0a69-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-116\" (UID: \"4a60199cb9cb05adf0423781a0db0a69\") " pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:17.525426 kubelet[3270]: I0707 06:16:17.525404 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a60199cb9cb05adf0423781a0db0a69-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-116\" (UID: \"4a60199cb9cb05adf0423781a0db0a69\") " pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:17.525594 kubelet[3270]: I0707 06:16:17.525578 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:17.525754 kubelet[3270]: I0707 06:16:17.525737 3270 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:17.525870 kubelet[3270]: I0707 06:16:17.525858 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a800753c77f252e1c96a6ead39803caf-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-116\" (UID: \"a800753c77f252e1c96a6ead39803caf\") " pod="kube-system/kube-scheduler-ip-172-31-23-116" Jul 7 06:16:17.526022 kubelet[3270]: I0707 06:16:17.526005 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a60199cb9cb05adf0423781a0db0a69-ca-certs\") pod \"kube-apiserver-ip-172-31-23-116\" (UID: \"4a60199cb9cb05adf0423781a0db0a69\") " pod="kube-system/kube-apiserver-ip-172-31-23-116" Jul 7 06:16:17.526150 kubelet[3270]: I0707 06:16:17.526137 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:17.526278 kubelet[3270]: I0707 06:16:17.526261 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:17.526501 
kubelet[3270]: I0707 06:16:17.526450 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67b6a9e5e33f53bd06dae326ce7be9d4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-116\" (UID: \"67b6a9e5e33f53bd06dae326ce7be9d4\") " pod="kube-system/kube-controller-manager-ip-172-31-23-116" Jul 7 06:16:17.583888 kubelet[3270]: I0707 06:16:17.583853 3270 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-116" Jul 7 06:16:17.595672 kubelet[3270]: I0707 06:16:17.594784 3270 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-116" Jul 7 06:16:17.595672 kubelet[3270]: I0707 06:16:17.594982 3270 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-116" Jul 7 06:16:18.067115 sudo[3301]: pam_unix(sudo:session): session closed for user root Jul 7 06:16:18.280645 kubelet[3270]: I0707 06:16:18.280362 3270 apiserver.go:52] "Watching apiserver" Jul 7 06:16:18.324643 kubelet[3270]: I0707 06:16:18.324526 3270 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:16:18.461586 kubelet[3270]: I0707 06:16:18.461513 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-116" podStartSLOduration=1.461491799 podStartE2EDuration="1.461491799s" podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:18.424502349 +0000 UTC m=+1.251455324" watchObservedRunningTime="2025-07-07 06:16:18.461491799 +0000 UTC m=+1.288444777" Jul 7 06:16:18.476507 kubelet[3270]: I0707 06:16:18.476445 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-116" podStartSLOduration=1.4764235 podStartE2EDuration="1.4764235s" 
podCreationTimestamp="2025-07-07 06:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:18.461828564 +0000 UTC m=+1.288781543" watchObservedRunningTime="2025-07-07 06:16:18.4764235 +0000 UTC m=+1.303376478" Jul 7 06:16:18.494689 kubelet[3270]: I0707 06:16:18.494603 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-116" podStartSLOduration=3.494581227 podStartE2EDuration="3.494581227s" podCreationTimestamp="2025-07-07 06:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:18.477394914 +0000 UTC m=+1.304347895" watchObservedRunningTime="2025-07-07 06:16:18.494581227 +0000 UTC m=+1.321534206" Jul 7 06:16:19.823062 sudo[2342]: pam_unix(sudo:session): session closed for user root Jul 7 06:16:19.847555 sshd[2341]: Connection closed by 139.178.89.65 port 45126 Jul 7 06:16:19.846949 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:19.851065 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:16:19.851164 systemd[1]: sshd@6-172.31.23.116:22-139.178.89.65:45126.service: Deactivated successfully. Jul 7 06:16:19.853714 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:16:19.853955 systemd[1]: session-7.scope: Consumed 4.529s CPU time, 210.5M memory peak. Jul 7 06:16:19.856528 systemd-logind[1875]: Removed session 7. Jul 7 06:16:21.569204 kubelet[3270]: I0707 06:16:21.569172 3270 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:16:21.569851 containerd[1926]: time="2025-07-07T06:16:21.569792089Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 06:16:21.570289 kubelet[3270]: I0707 06:16:21.570159 3270 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:16:22.276733 systemd[1]: Created slice kubepods-besteffort-podb2c02941_7465_41bd_b114_4a1d7ad6f881.slice - libcontainer container kubepods-besteffort-podb2c02941_7465_41bd_b114_4a1d7ad6f881.slice. Jul 7 06:16:22.277822 kubelet[3270]: W0707 06:16:22.277093 3270 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-23-116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-116' and this object Jul 7 06:16:22.277822 kubelet[3270]: E0707 06:16:22.277134 3270 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-23-116\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-116' and this object" logger="UnhandledError" Jul 7 06:16:22.277822 kubelet[3270]: W0707 06:16:22.277641 3270 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-116' and this object Jul 7 06:16:22.277822 kubelet[3270]: E0707 06:16:22.277679 3270 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-23-116\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between 
node 'ip-172-31-23-116' and this object" logger="UnhandledError" Jul 7 06:16:22.282694 kubelet[3270]: W0707 06:16:22.282566 3270 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-23-116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-116' and this object Jul 7 06:16:22.282694 kubelet[3270]: E0707 06:16:22.282619 3270 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-23-116\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-116' and this object" logger="UnhandledError" Jul 7 06:16:22.291218 systemd[1]: Created slice kubepods-burstable-podb407ee20_16e1_433c_9d0d_b1ccd11db3d0.slice - libcontainer container kubepods-burstable-podb407ee20_16e1_433c_9d0d_b1ccd11db3d0.slice. 
Jul 7 06:16:22.359010 kubelet[3270]: I0707 06:16:22.358942 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2c02941-7465-41bd-b114-4a1d7ad6f881-kube-proxy\") pod \"kube-proxy-bbqs9\" (UID: \"b2c02941-7465-41bd-b114-4a1d7ad6f881\") " pod="kube-system/kube-proxy-bbqs9" Jul 7 06:16:22.359010 kubelet[3270]: I0707 06:16:22.358989 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq2nf\" (UniqueName: \"kubernetes.io/projected/b2c02941-7465-41bd-b114-4a1d7ad6f881-kube-api-access-jq2nf\") pod \"kube-proxy-bbqs9\" (UID: \"b2c02941-7465-41bd-b114-4a1d7ad6f881\") " pod="kube-system/kube-proxy-bbqs9" Jul 7 06:16:22.359010 kubelet[3270]: I0707 06:16:22.359011 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-xtables-lock\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359010 kubelet[3270]: I0707 06:16:22.359028 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-kernel\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359541 kubelet[3270]: I0707 06:16:22.359045 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hostproc\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359541 kubelet[3270]: I0707 06:16:22.359059 3270 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-lib-modules\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359541 kubelet[3270]: I0707 06:16:22.359074 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-bpf-maps\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359541 kubelet[3270]: I0707 06:16:22.359088 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hubble-tls\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359541 kubelet[3270]: I0707 06:16:22.359102 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-clustermesh-secrets\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359541 kubelet[3270]: I0707 06:16:22.359117 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-config-path\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359717 kubelet[3270]: I0707 06:16:22.359132 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-net\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359717 kubelet[3270]: I0707 06:16:22.359146 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c02941-7465-41bd-b114-4a1d7ad6f881-lib-modules\") pod \"kube-proxy-bbqs9\" (UID: \"b2c02941-7465-41bd-b114-4a1d7ad6f881\") " pod="kube-system/kube-proxy-bbqs9" Jul 7 06:16:22.359717 kubelet[3270]: I0707 06:16:22.359198 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-cgroup\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359717 kubelet[3270]: I0707 06:16:22.359231 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cni-path\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359717 kubelet[3270]: I0707 06:16:22.359249 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-etc-cni-netd\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359717 kubelet[3270]: I0707 06:16:22.359267 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-run\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") 
" pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359868 kubelet[3270]: I0707 06:16:22.359283 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8g2\" (UniqueName: \"kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-kube-api-access-td8g2\") pod \"cilium-5wft9\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " pod="kube-system/cilium-5wft9" Jul 7 06:16:22.359868 kubelet[3270]: I0707 06:16:22.359314 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c02941-7465-41bd-b114-4a1d7ad6f881-xtables-lock\") pod \"kube-proxy-bbqs9\" (UID: \"b2c02941-7465-41bd-b114-4a1d7ad6f881\") " pod="kube-system/kube-proxy-bbqs9" Jul 7 06:16:22.590740 containerd[1926]: time="2025-07-07T06:16:22.590629024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbqs9,Uid:b2c02941-7465-41bd-b114-4a1d7ad6f881,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:22.653769 containerd[1926]: time="2025-07-07T06:16:22.653720138Z" level=info msg="connecting to shim b61cdcb580d4d8f25a7b5ac858c4f1bf2aa021605af01569033b8c1bc689e804" address="unix:///run/containerd/s/1713893c201ad8246bbed0584e2f4b31c3147859b85f9d1fc47d560820e347a4" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:22.703036 systemd[1]: Started cri-containerd-b61cdcb580d4d8f25a7b5ac858c4f1bf2aa021605af01569033b8c1bc689e804.scope - libcontainer container b61cdcb580d4d8f25a7b5ac858c4f1bf2aa021605af01569033b8c1bc689e804. Jul 7 06:16:22.730812 systemd[1]: Created slice kubepods-besteffort-podbb4f29e6_b2ba_421b_bd60_8b9e19cd539b.slice - libcontainer container kubepods-besteffort-podbb4f29e6_b2ba_421b_bd60_8b9e19cd539b.slice. 
Jul 7 06:16:22.762253 kubelet[3270]: I0707 06:16:22.762204 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-cilium-config-path\") pod \"cilium-operator-5d85765b45-jtffx\" (UID: \"bb4f29e6-b2ba-421b-bd60-8b9e19cd539b\") " pod="kube-system/cilium-operator-5d85765b45-jtffx" Jul 7 06:16:22.762253 kubelet[3270]: I0707 06:16:22.762255 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjjhf\" (UniqueName: \"kubernetes.io/projected/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-kube-api-access-zjjhf\") pod \"cilium-operator-5d85765b45-jtffx\" (UID: \"bb4f29e6-b2ba-421b-bd60-8b9e19cd539b\") " pod="kube-system/cilium-operator-5d85765b45-jtffx" Jul 7 06:16:22.791929 containerd[1926]: time="2025-07-07T06:16:22.791827315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bbqs9,Uid:b2c02941-7465-41bd-b114-4a1d7ad6f881,Namespace:kube-system,Attempt:0,} returns sandbox id \"b61cdcb580d4d8f25a7b5ac858c4f1bf2aa021605af01569033b8c1bc689e804\"" Jul 7 06:16:22.796378 containerd[1926]: time="2025-07-07T06:16:22.796338452Z" level=info msg="CreateContainer within sandbox \"b61cdcb580d4d8f25a7b5ac858c4f1bf2aa021605af01569033b8c1bc689e804\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:16:22.819860 containerd[1926]: time="2025-07-07T06:16:22.819824584Z" level=info msg="Container e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:22.844104 containerd[1926]: time="2025-07-07T06:16:22.843750057Z" level=info msg="CreateContainer within sandbox \"b61cdcb580d4d8f25a7b5ac858c4f1bf2aa021605af01569033b8c1bc689e804\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72\"" Jul 7 06:16:22.845942 
containerd[1926]: time="2025-07-07T06:16:22.845772642Z" level=info msg="StartContainer for \"e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72\"" Jul 7 06:16:22.847460 containerd[1926]: time="2025-07-07T06:16:22.847377343Z" level=info msg="connecting to shim e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72" address="unix:///run/containerd/s/1713893c201ad8246bbed0584e2f4b31c3147859b85f9d1fc47d560820e347a4" protocol=ttrpc version=3 Jul 7 06:16:22.868988 systemd[1]: Started cri-containerd-e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72.scope - libcontainer container e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72. Jul 7 06:16:22.937819 containerd[1926]: time="2025-07-07T06:16:22.937781342Z" level=info msg="StartContainer for \"e518abd4cdc0f88666736f58bf949b464c1a02d1c30528e28963e4b1ea8dea72\" returns successfully" Jul 7 06:16:23.335264 containerd[1926]: time="2025-07-07T06:16:23.335214092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jtffx,Uid:bb4f29e6-b2ba-421b-bd60-8b9e19cd539b,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:23.373396 containerd[1926]: time="2025-07-07T06:16:23.373341740Z" level=info msg="connecting to shim ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be" address="unix:///run/containerd/s/d251e8c58088d39a60e97c06bd93006ec0bad756e7d79b78c6d56d3040dead00" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:23.398866 systemd[1]: Started cri-containerd-ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be.scope - libcontainer container ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be. 
Jul 7 06:16:23.466591 kubelet[3270]: E0707 06:16:23.465834 3270 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 7 06:16:23.466591 kubelet[3270]: E0707 06:16:23.465875 3270 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-5wft9: failed to sync secret cache: timed out waiting for the condition Jul 7 06:16:23.466591 kubelet[3270]: E0707 06:16:23.465948 3270 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hubble-tls podName:b407ee20-16e1-433c-9d0d-b1ccd11db3d0 nodeName:}" failed. No retries permitted until 2025-07-07 06:16:23.965929149 +0000 UTC m=+6.792882108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hubble-tls") pod "cilium-5wft9" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0") : failed to sync secret cache: timed out waiting for the condition Jul 7 06:16:23.469821 containerd[1926]: time="2025-07-07T06:16:23.469774618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jtffx,Uid:bb4f29e6-b2ba-421b-bd60-8b9e19cd539b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\"" Jul 7 06:16:23.472390 containerd[1926]: time="2025-07-07T06:16:23.472346473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 06:16:24.096717 containerd[1926]: time="2025-07-07T06:16:24.096637459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5wft9,Uid:b407ee20-16e1-433c-9d0d-b1ccd11db3d0,Namespace:kube-system,Attempt:0,}" Jul 7 06:16:24.132612 containerd[1926]: time="2025-07-07T06:16:24.132526673Z" level=info msg="connecting to shim 
32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00" address="unix:///run/containerd/s/f270d418bf607a940341f458f55e3dfea07d67fc3962b78228af4cb4650e0cd1" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:16:24.160981 systemd[1]: Started cri-containerd-32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00.scope - libcontainer container 32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00. Jul 7 06:16:24.208884 containerd[1926]: time="2025-07-07T06:16:24.208829609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5wft9,Uid:b407ee20-16e1-433c-9d0d-b1ccd11db3d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\"" Jul 7 06:16:24.337235 kubelet[3270]: I0707 06:16:24.336761 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bbqs9" podStartSLOduration=2.336739891 podStartE2EDuration="2.336739891s" podCreationTimestamp="2025-07-07 06:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:23.428125224 +0000 UTC m=+6.255078201" watchObservedRunningTime="2025-07-07 06:16:24.336739891 +0000 UTC m=+7.163692871" Jul 7 06:16:24.960857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653373814.mount: Deactivated successfully. 
Jul 7 06:16:25.591310 containerd[1926]: time="2025-07-07T06:16:25.590677955Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:25.591310 containerd[1926]: time="2025-07-07T06:16:25.591280361Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 06:16:25.592492 containerd[1926]: time="2025-07-07T06:16:25.592456349Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:16:25.593782 containerd[1926]: time="2025-07-07T06:16:25.593743504Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.121352033s" Jul 7 06:16:25.593927 containerd[1926]: time="2025-07-07T06:16:25.593904911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 06:16:25.595546 containerd[1926]: time="2025-07-07T06:16:25.595050774Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 06:16:25.598573 containerd[1926]: time="2025-07-07T06:16:25.598528440Z" level=info msg="CreateContainer within sandbox 
\"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 06:16:25.609930 containerd[1926]: time="2025-07-07T06:16:25.609820971Z" level=info msg="Container 210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:16:25.622714 containerd[1926]: time="2025-07-07T06:16:25.622675108Z" level=info msg="CreateContainer within sandbox \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\"" Jul 7 06:16:25.623460 containerd[1926]: time="2025-07-07T06:16:25.623425054Z" level=info msg="StartContainer for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\"" Jul 7 06:16:25.625475 containerd[1926]: time="2025-07-07T06:16:25.625443710Z" level=info msg="connecting to shim 210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e" address="unix:///run/containerd/s/d251e8c58088d39a60e97c06bd93006ec0bad756e7d79b78c6d56d3040dead00" protocol=ttrpc version=3 Jul 7 06:16:25.657908 systemd[1]: Started cri-containerd-210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e.scope - libcontainer container 210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e. 
Jul 7 06:16:25.695327 containerd[1926]: time="2025-07-07T06:16:25.695212947Z" level=info msg="StartContainer for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" returns successfully" Jul 7 06:16:29.042394 kubelet[3270]: I0707 06:16:29.042300 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jtffx" podStartSLOduration=4.9187642700000005 podStartE2EDuration="7.042279258s" podCreationTimestamp="2025-07-07 06:16:22 +0000 UTC" firstStartedPulling="2025-07-07 06:16:23.471341699 +0000 UTC m=+6.298294656" lastFinishedPulling="2025-07-07 06:16:25.594856669 +0000 UTC m=+8.421809644" observedRunningTime="2025-07-07 06:16:26.546719549 +0000 UTC m=+9.373672531" watchObservedRunningTime="2025-07-07 06:16:29.042279258 +0000 UTC m=+11.869232236" Jul 7 06:16:30.110760 update_engine[1876]: I20250707 06:16:30.110697 1876 update_attempter.cc:509] Updating boot flags... Jul 7 06:16:31.004127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136189527.mount: Deactivated successfully. 
Jul 7 06:16:33.506460 containerd[1926]: time="2025-07-07T06:16:33.506380052Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:33.508566 containerd[1926]: time="2025-07-07T06:16:33.508519385Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 7 06:16:33.511419 containerd[1926]: time="2025-07-07T06:16:33.510638777Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:16:33.511863 containerd[1926]: time="2025-07-07T06:16:33.511831018Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.916745689s"
Jul 7 06:16:33.511932 containerd[1926]: time="2025-07-07T06:16:33.511871184Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 7 06:16:33.516662 containerd[1926]: time="2025-07-07T06:16:33.516616855Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 06:16:33.552580 containerd[1926]: time="2025-07-07T06:16:33.552029326Z" level=info msg="Container b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:33.558756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104008781.mount: Deactivated successfully.
Jul 7 06:16:33.567141 containerd[1926]: time="2025-07-07T06:16:33.567096684Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\""
Jul 7 06:16:33.569088 containerd[1926]: time="2025-07-07T06:16:33.568965022Z" level=info msg="StartContainer for \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\""
Jul 7 06:16:33.570004 containerd[1926]: time="2025-07-07T06:16:33.569953066Z" level=info msg="connecting to shim b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954" address="unix:///run/containerd/s/f270d418bf607a940341f458f55e3dfea07d67fc3962b78228af4cb4650e0cd1" protocol=ttrpc version=3
Jul 7 06:16:33.617001 systemd[1]: Started cri-containerd-b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954.scope - libcontainer container b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954.
Jul 7 06:16:33.654103 containerd[1926]: time="2025-07-07T06:16:33.654033742Z" level=info msg="StartContainer for \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" returns successfully"
Jul 7 06:16:33.665632 systemd[1]: cri-containerd-b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954.scope: Deactivated successfully.
Jul 7 06:16:33.725762 containerd[1926]: time="2025-07-07T06:16:33.725518362Z" level=info msg="received exit event container_id:\"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" id:\"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" pid:3919 exited_at:{seconds:1751868993 nanos:666483531}"
Jul 7 06:16:33.736447 containerd[1926]: time="2025-07-07T06:16:33.736394559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" id:\"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" pid:3919 exited_at:{seconds:1751868993 nanos:666483531}"
Jul 7 06:16:33.757238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954-rootfs.mount: Deactivated successfully.
Jul 7 06:16:34.565888 containerd[1926]: time="2025-07-07T06:16:34.565850542Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 06:16:34.579675 containerd[1926]: time="2025-07-07T06:16:34.578908556Z" level=info msg="Container ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:34.590129 containerd[1926]: time="2025-07-07T06:16:34.590086115Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\""
Jul 7 06:16:34.591358 containerd[1926]: time="2025-07-07T06:16:34.590721320Z" level=info msg="StartContainer for \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\""
Jul 7 06:16:34.594162 containerd[1926]: time="2025-07-07T06:16:34.594118688Z" level=info msg="connecting to shim ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea" address="unix:///run/containerd/s/f270d418bf607a940341f458f55e3dfea07d67fc3962b78228af4cb4650e0cd1" protocol=ttrpc version=3
Jul 7 06:16:34.631931 systemd[1]: Started cri-containerd-ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea.scope - libcontainer container ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea.
Jul 7 06:16:34.673607 containerd[1926]: time="2025-07-07T06:16:34.673567327Z" level=info msg="StartContainer for \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" returns successfully"
Jul 7 06:16:34.691782 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:16:34.692136 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:16:34.693973 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:16:34.697500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:16:34.698831 containerd[1926]: time="2025-07-07T06:16:34.698553722Z" level=info msg="received exit event container_id:\"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" id:\"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" pid:3964 exited_at:{seconds:1751868994 nanos:698300577}"
Jul 7 06:16:34.702040 containerd[1926]: time="2025-07-07T06:16:34.700388558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" id:\"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" pid:3964 exited_at:{seconds:1751868994 nanos:698300577}"
Jul 7 06:16:34.706056 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:16:34.707819 systemd[1]: cri-containerd-ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea.scope: Deactivated successfully.
Jul 7 06:16:34.740475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea-rootfs.mount: Deactivated successfully.
Jul 7 06:16:34.749696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:16:35.572130 containerd[1926]: time="2025-07-07T06:16:35.570746546Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 06:16:35.600674 containerd[1926]: time="2025-07-07T06:16:35.599632184Z" level=info msg="Container eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:35.604817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314283587.mount: Deactivated successfully.
Jul 7 06:16:35.614675 containerd[1926]: time="2025-07-07T06:16:35.614620923Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\""
Jul 7 06:16:35.615247 containerd[1926]: time="2025-07-07T06:16:35.615208606Z" level=info msg="StartContainer for \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\""
Jul 7 06:16:35.617057 containerd[1926]: time="2025-07-07T06:16:35.617021805Z" level=info msg="connecting to shim eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8" address="unix:///run/containerd/s/f270d418bf607a940341f458f55e3dfea07d67fc3962b78228af4cb4650e0cd1" protocol=ttrpc version=3
Jul 7 06:16:35.641873 systemd[1]: Started cri-containerd-eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8.scope - libcontainer container eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8.
Jul 7 06:16:35.691583 containerd[1926]: time="2025-07-07T06:16:35.691541607Z" level=info msg="StartContainer for \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" returns successfully"
Jul 7 06:16:35.703050 systemd[1]: cri-containerd-eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8.scope: Deactivated successfully.
Jul 7 06:16:35.703827 systemd[1]: cri-containerd-eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8.scope: Consumed 27ms CPU time, 4.2M memory peak, 1.2M read from disk.
Jul 7 06:16:35.704581 containerd[1926]: time="2025-07-07T06:16:35.704500971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" id:\"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" pid:4012 exited_at:{seconds:1751868995 nanos:703808381}"
Jul 7 06:16:35.704850 containerd[1926]: time="2025-07-07T06:16:35.704788978Z" level=info msg="received exit event container_id:\"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" id:\"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" pid:4012 exited_at:{seconds:1751868995 nanos:703808381}"
Jul 7 06:16:35.742042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8-rootfs.mount: Deactivated successfully.
Jul 7 06:16:36.579445 containerd[1926]: time="2025-07-07T06:16:36.579393703Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 06:16:36.600958 containerd[1926]: time="2025-07-07T06:16:36.600920832Z" level=info msg="Container 4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:36.612833 containerd[1926]: time="2025-07-07T06:16:36.612759960Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\""
Jul 7 06:16:36.613466 containerd[1926]: time="2025-07-07T06:16:36.613439380Z" level=info msg="StartContainer for \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\""
Jul 7 06:16:36.614522 containerd[1926]: time="2025-07-07T06:16:36.614456185Z" level=info msg="connecting to shim 4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500" address="unix:///run/containerd/s/f270d418bf607a940341f458f55e3dfea07d67fc3962b78228af4cb4650e0cd1" protocol=ttrpc version=3
Jul 7 06:16:36.641878 systemd[1]: Started cri-containerd-4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500.scope - libcontainer container 4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500.
Jul 7 06:16:36.670112 systemd[1]: cri-containerd-4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500.scope: Deactivated successfully.
Jul 7 06:16:36.673088 containerd[1926]: time="2025-07-07T06:16:36.672997423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" id:\"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" pid:4051 exited_at:{seconds:1751868996 nanos:671432460}"
Jul 7 06:16:36.674870 containerd[1926]: time="2025-07-07T06:16:36.674835666Z" level=info msg="received exit event container_id:\"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" id:\"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" pid:4051 exited_at:{seconds:1751868996 nanos:671432460}"
Jul 7 06:16:36.682431 containerd[1926]: time="2025-07-07T06:16:36.682400137Z" level=info msg="StartContainer for \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" returns successfully"
Jul 7 06:16:36.697292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500-rootfs.mount: Deactivated successfully.
Jul 7 06:16:37.584894 containerd[1926]: time="2025-07-07T06:16:37.584830674Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 06:16:37.621848 containerd[1926]: time="2025-07-07T06:16:37.619826636Z" level=info msg="Container 0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:37.621828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424167102.mount: Deactivated successfully.
Jul 7 06:16:37.629897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183849856.mount: Deactivated successfully.
Jul 7 06:16:37.642182 containerd[1926]: time="2025-07-07T06:16:37.642136847Z" level=info msg="CreateContainer within sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\""
Jul 7 06:16:37.643791 containerd[1926]: time="2025-07-07T06:16:37.643756713Z" level=info msg="StartContainer for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\""
Jul 7 06:16:37.644718 containerd[1926]: time="2025-07-07T06:16:37.644686232Z" level=info msg="connecting to shim 0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714" address="unix:///run/containerd/s/f270d418bf607a940341f458f55e3dfea07d67fc3962b78228af4cb4650e0cd1" protocol=ttrpc version=3
Jul 7 06:16:37.675903 systemd[1]: Started cri-containerd-0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714.scope - libcontainer container 0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714.
Jul 7 06:16:37.718911 containerd[1926]: time="2025-07-07T06:16:37.718845701Z" level=info msg="StartContainer for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" returns successfully"
Jul 7 06:16:37.858846 containerd[1926]: time="2025-07-07T06:16:37.858716130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" id:\"e2a4a7cfbd34e5e559ed3a984e956dacea0ab0050b847dc064d2d538ec398b19\" pid:4120 exited_at:{seconds:1751868997 nanos:857505163}"
Jul 7 06:16:37.878681 kubelet[3270]: I0707 06:16:37.878548 3270 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 7 06:16:37.925300 systemd[1]: Created slice kubepods-burstable-pod22ae333b_4636_4d96_92db_b982e36ed7e2.slice - libcontainer container kubepods-burstable-pod22ae333b_4636_4d96_92db_b982e36ed7e2.slice.
Jul 7 06:16:37.935030 systemd[1]: Created slice kubepods-burstable-pod7cfcb029_6b33_48b3_bf34_c754c7b61f36.slice - libcontainer container kubepods-burstable-pod7cfcb029_6b33_48b3_bf34_c754c7b61f36.slice.
Jul 7 06:16:38.072605 kubelet[3270]: I0707 06:16:38.072465 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cfcb029-6b33-48b3-bf34-c754c7b61f36-config-volume\") pod \"coredns-7c65d6cfc9-kg62n\" (UID: \"7cfcb029-6b33-48b3-bf34-c754c7b61f36\") " pod="kube-system/coredns-7c65d6cfc9-kg62n"
Jul 7 06:16:38.072605 kubelet[3270]: I0707 06:16:38.072517 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfggz\" (UniqueName: \"kubernetes.io/projected/22ae333b-4636-4d96-92db-b982e36ed7e2-kube-api-access-vfggz\") pod \"coredns-7c65d6cfc9-5l9jv\" (UID: \"22ae333b-4636-4d96-92db-b982e36ed7e2\") " pod="kube-system/coredns-7c65d6cfc9-5l9jv"
Jul 7 06:16:38.072605 kubelet[3270]: I0707 06:16:38.072538 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghr9w\" (UniqueName: \"kubernetes.io/projected/7cfcb029-6b33-48b3-bf34-c754c7b61f36-kube-api-access-ghr9w\") pod \"coredns-7c65d6cfc9-kg62n\" (UID: \"7cfcb029-6b33-48b3-bf34-c754c7b61f36\") " pod="kube-system/coredns-7c65d6cfc9-kg62n"
Jul 7 06:16:38.072605 kubelet[3270]: I0707 06:16:38.072554 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ae333b-4636-4d96-92db-b982e36ed7e2-config-volume\") pod \"coredns-7c65d6cfc9-5l9jv\" (UID: \"22ae333b-4636-4d96-92db-b982e36ed7e2\") " pod="kube-system/coredns-7c65d6cfc9-5l9jv"
Jul 7 06:16:38.232458 containerd[1926]: time="2025-07-07T06:16:38.232130946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5l9jv,Uid:22ae333b-4636-4d96-92db-b982e36ed7e2,Namespace:kube-system,Attempt:0,}"
Jul 7 06:16:38.240254 containerd[1926]: time="2025-07-07T06:16:38.240213094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg62n,Uid:7cfcb029-6b33-48b3-bf34-c754c7b61f36,Namespace:kube-system,Attempt:0,}"
Jul 7 06:16:40.305240 (udev-worker)[4210]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 06:16:40.306015 (udev-worker)[4177]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 06:16:40.307098 systemd-networkd[1826]: cilium_host: Link UP
Jul 7 06:16:40.307276 systemd-networkd[1826]: cilium_net: Link UP
Jul 7 06:16:40.307414 systemd-networkd[1826]: cilium_net: Gained carrier
Jul 7 06:16:40.308435 systemd-networkd[1826]: cilium_host: Gained carrier
Jul 7 06:16:40.438630 systemd-networkd[1826]: cilium_vxlan: Link UP
Jul 7 06:16:40.438639 systemd-networkd[1826]: cilium_vxlan: Gained carrier
Jul 7 06:16:40.728957 systemd-networkd[1826]: cilium_net: Gained IPv6LL
Jul 7 06:16:40.785197 systemd-networkd[1826]: cilium_host: Gained IPv6LL
Jul 7 06:16:41.043684 kernel: NET: Registered PF_ALG protocol family
Jul 7 06:16:41.778310 systemd-networkd[1826]: lxc_health: Link UP
Jul 7 06:16:41.781278 systemd-networkd[1826]: lxc_health: Gained carrier
Jul 7 06:16:42.130667 kubelet[3270]: I0707 06:16:42.130571 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5wft9" podStartSLOduration=10.827509379 podStartE2EDuration="20.130429641s" podCreationTimestamp="2025-07-07 06:16:22 +0000 UTC" firstStartedPulling="2025-07-07 06:16:24.210120707 +0000 UTC m=+7.037073664" lastFinishedPulling="2025-07-07 06:16:33.513040969 +0000 UTC m=+16.339993926" observedRunningTime="2025-07-07 06:16:38.61024573 +0000 UTC m=+21.437198707" watchObservedRunningTime="2025-07-07 06:16:42.130429641 +0000 UTC m=+24.957382618"
Jul 7 06:16:42.329154 systemd-networkd[1826]: lxc625685d9c4e0: Link UP
Jul 7 06:16:42.338078 kernel: eth0: renamed from tmpd304f
Jul 7 06:16:42.338944 systemd-networkd[1826]: cilium_vxlan: Gained IPv6LL
Jul 7 06:16:42.341931 systemd-networkd[1826]: lxc625685d9c4e0: Gained carrier
Jul 7 06:16:42.344184 (udev-worker)[4221]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 06:16:42.354717 kernel: eth0: renamed from tmp8b734
Jul 7 06:16:42.360940 systemd-networkd[1826]: lxc6eba4f5228ac: Link UP
Jul 7 06:16:42.369332 systemd-networkd[1826]: lxc6eba4f5228ac: Gained carrier
Jul 7 06:16:43.104843 systemd-networkd[1826]: lxc_health: Gained IPv6LL
Jul 7 06:16:43.489061 systemd-networkd[1826]: lxc625685d9c4e0: Gained IPv6LL
Jul 7 06:16:43.808934 systemd-networkd[1826]: lxc6eba4f5228ac: Gained IPv6LL
Jul 7 06:16:46.095314 ntpd[1869]: Listen normally on 8 cilium_host 192.168.0.173:123
Jul 7 06:16:46.095413 ntpd[1869]: Listen normally on 9 cilium_net [fe80::8c04:91ff:feee:31a7%4]:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 8 cilium_host 192.168.0.173:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 9 cilium_net [fe80::8c04:91ff:feee:31a7%4]:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 10 cilium_host [fe80::401b:62ff:fe49:be09%5]:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 11 cilium_vxlan [fe80::70d6:baff:fef1:4ef2%6]:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 12 lxc_health [fe80::4423:4eff:fe96:7ef5%8]:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 13 lxc625685d9c4e0 [fe80::7867:85ff:fe3f:7ce8%10]:123
Jul 7 06:16:46.095877 ntpd[1869]: 7 Jul 06:16:46 ntpd[1869]: Listen normally on 14 lxc6eba4f5228ac [fe80::c037:24ff:fef3:5fce%12]:123
Jul 7 06:16:46.095473 ntpd[1869]: Listen normally on 10 cilium_host [fe80::401b:62ff:fe49:be09%5]:123
Jul 7 06:16:46.095514 ntpd[1869]: Listen normally on 11 cilium_vxlan [fe80::70d6:baff:fef1:4ef2%6]:123
Jul 7 06:16:46.095554 ntpd[1869]: Listen normally on 12 lxc_health [fe80::4423:4eff:fe96:7ef5%8]:123
Jul 7 06:16:46.095597 ntpd[1869]: Listen normally on 13 lxc625685d9c4e0 [fe80::7867:85ff:fe3f:7ce8%10]:123
Jul 7 06:16:46.095636 ntpd[1869]: Listen normally on 14 lxc6eba4f5228ac [fe80::c037:24ff:fef3:5fce%12]:123
Jul 7 06:16:46.644014 containerd[1926]: time="2025-07-07T06:16:46.643907123Z" level=info msg="connecting to shim 8b7349072044ed6ebba0a80a55e8510122f2c9dbe000168e884f0ab31fa41c95" address="unix:///run/containerd/s/4a06878a6301d9779a969a0f9b5d27f825cee5955dfd07adccaaed02ba23880a" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:16:46.661675 containerd[1926]: time="2025-07-07T06:16:46.661421905Z" level=info msg="connecting to shim d304f75094435484a52b195934a5750132407dab9b48bbe8812b087fe0d0eba6" address="unix:///run/containerd/s/f0344bf43156c9f5561c56c699fde72ee7ea44b7c0946375ac66edacdac06ef1" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:16:46.733880 systemd[1]: Started cri-containerd-d304f75094435484a52b195934a5750132407dab9b48bbe8812b087fe0d0eba6.scope - libcontainer container d304f75094435484a52b195934a5750132407dab9b48bbe8812b087fe0d0eba6.
Jul 7 06:16:46.747017 systemd[1]: Started cri-containerd-8b7349072044ed6ebba0a80a55e8510122f2c9dbe000168e884f0ab31fa41c95.scope - libcontainer container 8b7349072044ed6ebba0a80a55e8510122f2c9dbe000168e884f0ab31fa41c95.
Jul 7 06:16:46.830445 containerd[1926]: time="2025-07-07T06:16:46.830159077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kg62n,Uid:7cfcb029-6b33-48b3-bf34-c754c7b61f36,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b7349072044ed6ebba0a80a55e8510122f2c9dbe000168e884f0ab31fa41c95\""
Jul 7 06:16:46.839706 containerd[1926]: time="2025-07-07T06:16:46.838505438Z" level=info msg="CreateContainer within sandbox \"8b7349072044ed6ebba0a80a55e8510122f2c9dbe000168e884f0ab31fa41c95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:16:46.847356 containerd[1926]: time="2025-07-07T06:16:46.847325611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-5l9jv,Uid:22ae333b-4636-4d96-92db-b982e36ed7e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d304f75094435484a52b195934a5750132407dab9b48bbe8812b087fe0d0eba6\""
Jul 7 06:16:46.851983 containerd[1926]: time="2025-07-07T06:16:46.851953610Z" level=info msg="CreateContainer within sandbox \"d304f75094435484a52b195934a5750132407dab9b48bbe8812b087fe0d0eba6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 06:16:46.873926 containerd[1926]: time="2025-07-07T06:16:46.873890525Z" level=info msg="Container 69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:46.874175 containerd[1926]: time="2025-07-07T06:16:46.873917028Z" level=info msg="Container 157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:16:46.902532 containerd[1926]: time="2025-07-07T06:16:46.902403265Z" level=info msg="CreateContainer within sandbox \"8b7349072044ed6ebba0a80a55e8510122f2c9dbe000168e884f0ab31fa41c95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf\""
Jul 7 06:16:46.902995 containerd[1926]: time="2025-07-07T06:16:46.902860362Z" level=info msg="CreateContainer within sandbox \"d304f75094435484a52b195934a5750132407dab9b48bbe8812b087fe0d0eba6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e\""
Jul 7 06:16:46.905255 containerd[1926]: time="2025-07-07T06:16:46.905220454Z" level=info msg="StartContainer for \"157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf\""
Jul 7 06:16:46.905416 containerd[1926]: time="2025-07-07T06:16:46.905391592Z" level=info msg="StartContainer for \"69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e\""
Jul 7 06:16:46.906821 containerd[1926]: time="2025-07-07T06:16:46.906790389Z" level=info msg="connecting to shim 69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e" address="unix:///run/containerd/s/f0344bf43156c9f5561c56c699fde72ee7ea44b7c0946375ac66edacdac06ef1" protocol=ttrpc version=3
Jul 7 06:16:46.911440 containerd[1926]: time="2025-07-07T06:16:46.911098936Z" level=info msg="connecting to shim 157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf" address="unix:///run/containerd/s/4a06878a6301d9779a969a0f9b5d27f825cee5955dfd07adccaaed02ba23880a" protocol=ttrpc version=3
Jul 7 06:16:46.945915 systemd[1]: Started cri-containerd-69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e.scope - libcontainer container 69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e.
Jul 7 06:16:46.954175 systemd[1]: Started cri-containerd-157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf.scope - libcontainer container 157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf.
Jul 7 06:16:47.013223 containerd[1926]: time="2025-07-07T06:16:47.013183671Z" level=info msg="StartContainer for \"69b83ce0a7e1cfad9a2626f37fce4598414e82e4f915f257d09aba3301bff76e\" returns successfully"
Jul 7 06:16:47.013780 containerd[1926]: time="2025-07-07T06:16:47.013458265Z" level=info msg="StartContainer for \"157c596d348007ba6e8e11f834de22fca2eee9e24e65531d85305c7abf097ccf\" returns successfully"
Jul 7 06:16:47.620374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080323958.mount: Deactivated successfully.
Jul 7 06:16:47.632555 kubelet[3270]: I0707 06:16:47.632505 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-5l9jv" podStartSLOduration=25.632490142 podStartE2EDuration="25.632490142s" podCreationTimestamp="2025-07-07 06:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:47.630944266 +0000 UTC m=+30.457897241" watchObservedRunningTime="2025-07-07 06:16:47.632490142 +0000 UTC m=+30.459443119"
Jul 7 06:16:47.669875 kubelet[3270]: I0707 06:16:47.669826 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kg62n" podStartSLOduration=25.669810674 podStartE2EDuration="25.669810674s" podCreationTimestamp="2025-07-07 06:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:16:47.666923623 +0000 UTC m=+30.493876602" watchObservedRunningTime="2025-07-07 06:16:47.669810674 +0000 UTC m=+30.496763645"
Jul 7 06:16:52.446476 systemd[1]: Started sshd@7-172.31.23.116:22-139.178.89.65:41392.service - OpenSSH per-connection server daemon (139.178.89.65:41392).
Jul 7 06:16:52.650723 sshd[4754]: Accepted publickey for core from 139.178.89.65 port 41392 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:16:52.652488 sshd-session[4754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:52.660822 systemd-logind[1875]: New session 8 of user core.
Jul 7 06:16:52.665910 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 06:16:52.949674 kubelet[3270]: I0707 06:16:52.949494 3270 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:16:53.589823 sshd[4759]: Connection closed by 139.178.89.65 port 41392
Jul 7 06:16:53.590855 sshd-session[4754]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:53.602367 systemd[1]: sshd@7-172.31.23.116:22-139.178.89.65:41392.service: Deactivated successfully.
Jul 7 06:16:53.604191 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 06:16:53.605083 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit.
Jul 7 06:16:53.607388 systemd-logind[1875]: Removed session 8.
Jul 7 06:16:58.635272 systemd[1]: Started sshd@8-172.31.23.116:22-139.178.89.65:41396.service - OpenSSH per-connection server daemon (139.178.89.65:41396).
Jul 7 06:16:58.814905 sshd[4776]: Accepted publickey for core from 139.178.89.65 port 41396 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:16:58.816313 sshd-session[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:16:58.821726 systemd-logind[1875]: New session 9 of user core.
Jul 7 06:16:58.826914 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:16:59.032305 sshd[4778]: Connection closed by 139.178.89.65 port 41396
Jul 7 06:16:59.033113 sshd-session[4776]: pam_unix(sshd:session): session closed for user core
Jul 7 06:16:59.037953 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:16:59.038102 systemd[1]: sshd@8-172.31.23.116:22-139.178.89.65:41396.service: Deactivated successfully.
Jul 7 06:16:59.040790 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:16:59.042709 systemd-logind[1875]: Removed session 9.
Jul 7 06:17:04.066622 systemd[1]: Started sshd@9-172.31.23.116:22-139.178.89.65:40104.service - OpenSSH per-connection server daemon (139.178.89.65:40104).
Jul 7 06:17:04.238232 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 40104 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:17:04.239848 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:04.246205 systemd-logind[1875]: New session 10 of user core.
Jul 7 06:17:04.255068 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:17:04.465818 sshd[4793]: Connection closed by 139.178.89.65 port 40104
Jul 7 06:17:04.466584 sshd-session[4791]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:04.472078 systemd[1]: sshd@9-172.31.23.116:22-139.178.89.65:40104.service: Deactivated successfully.
Jul 7 06:17:04.474946 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:17:04.476349 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:17:04.478604 systemd-logind[1875]: Removed session 10.
Jul 7 06:17:04.494442 systemd[1]: Started sshd@10-172.31.23.116:22-139.178.89.65:40108.service - OpenSSH per-connection server daemon (139.178.89.65:40108).
Jul 7 06:17:04.665699 sshd[4806]: Accepted publickey for core from 139.178.89.65 port 40108 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco
Jul 7 06:17:04.667304 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:04.673977 systemd-logind[1875]: New session 11 of user core.
Jul 7 06:17:04.679923 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:17:04.908306 sshd[4808]: Connection closed by 139.178.89.65 port 40108 Jul 7 06:17:04.908733 sshd-session[4806]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:04.917774 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:17:04.920547 systemd[1]: sshd@10-172.31.23.116:22-139.178.89.65:40108.service: Deactivated successfully. Jul 7 06:17:04.925642 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:17:04.930993 systemd-logind[1875]: Removed session 11. Jul 7 06:17:04.949630 systemd[1]: Started sshd@11-172.31.23.116:22-139.178.89.65:40110.service - OpenSSH per-connection server daemon (139.178.89.65:40110). Jul 7 06:17:05.123792 sshd[4818]: Accepted publickey for core from 139.178.89.65 port 40110 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:05.125641 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:05.131368 systemd-logind[1875]: New session 12 of user core. Jul 7 06:17:05.141028 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:17:05.339089 sshd[4820]: Connection closed by 139.178.89.65 port 40110 Jul 7 06:17:05.340802 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:05.343786 systemd[1]: sshd@11-172.31.23.116:22-139.178.89.65:40110.service: Deactivated successfully. Jul 7 06:17:05.346317 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:17:05.348760 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:17:05.350482 systemd-logind[1875]: Removed session 12. Jul 7 06:17:10.376077 systemd[1]: Started sshd@12-172.31.23.116:22-139.178.89.65:47080.service - OpenSSH per-connection server daemon (139.178.89.65:47080). 
Jul 7 06:17:10.542286 sshd[4833]: Accepted publickey for core from 139.178.89.65 port 47080 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:10.542822 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:10.549432 systemd-logind[1875]: New session 13 of user core. Jul 7 06:17:10.556893 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:17:10.740982 sshd[4835]: Connection closed by 139.178.89.65 port 47080 Jul 7 06:17:10.741234 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:10.745885 systemd[1]: sshd@12-172.31.23.116:22-139.178.89.65:47080.service: Deactivated successfully. Jul 7 06:17:10.748713 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:17:10.749591 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:17:10.751610 systemd-logind[1875]: Removed session 13. Jul 7 06:17:15.774123 systemd[1]: Started sshd@13-172.31.23.116:22-139.178.89.65:47088.service - OpenSSH per-connection server daemon (139.178.89.65:47088). Jul 7 06:17:15.945404 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 47088 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:15.945990 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:15.951322 systemd-logind[1875]: New session 14 of user core. Jul 7 06:17:15.958915 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:17:16.146122 sshd[4849]: Connection closed by 139.178.89.65 port 47088 Jul 7 06:17:16.146877 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:16.151729 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:17:16.152353 systemd[1]: sshd@13-172.31.23.116:22-139.178.89.65:47088.service: Deactivated successfully. 
Jul 7 06:17:16.154445 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:17:16.157091 systemd-logind[1875]: Removed session 14. Jul 7 06:17:21.186934 systemd[1]: Started sshd@14-172.31.23.116:22-139.178.89.65:59150.service - OpenSSH per-connection server daemon (139.178.89.65:59150). Jul 7 06:17:21.361376 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 59150 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:21.363589 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:21.371023 systemd-logind[1875]: New session 15 of user core. Jul 7 06:17:21.376920 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:17:21.571169 sshd[4866]: Connection closed by 139.178.89.65 port 59150 Jul 7 06:17:21.572966 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:21.576062 systemd[1]: sshd@14-172.31.23.116:22-139.178.89.65:59150.service: Deactivated successfully. Jul 7 06:17:21.578396 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:17:21.579933 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:17:21.582098 systemd-logind[1875]: Removed session 15. Jul 7 06:17:21.602256 systemd[1]: Started sshd@15-172.31.23.116:22-139.178.89.65:59154.service - OpenSSH per-connection server daemon (139.178.89.65:59154). Jul 7 06:17:21.784245 sshd[4878]: Accepted publickey for core from 139.178.89.65 port 59154 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:21.785831 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:21.791398 systemd-logind[1875]: New session 16 of user core. Jul 7 06:17:21.801913 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 7 06:17:22.478338 sshd[4880]: Connection closed by 139.178.89.65 port 59154 Jul 7 06:17:22.480205 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:22.489311 systemd[1]: sshd@15-172.31.23.116:22-139.178.89.65:59154.service: Deactivated successfully. Jul 7 06:17:22.492750 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:17:22.493930 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:17:22.497022 systemd-logind[1875]: Removed session 16. Jul 7 06:17:22.515753 systemd[1]: Started sshd@16-172.31.23.116:22-139.178.89.65:59168.service - OpenSSH per-connection server daemon (139.178.89.65:59168). Jul 7 06:17:22.715037 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 59168 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:22.716505 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:22.722122 systemd-logind[1875]: New session 17 of user core. Jul 7 06:17:22.727893 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:17:24.650070 sshd[4892]: Connection closed by 139.178.89.65 port 59168 Jul 7 06:17:24.650985 sshd-session[4890]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:24.660381 systemd[1]: sshd@16-172.31.23.116:22-139.178.89.65:59168.service: Deactivated successfully. Jul 7 06:17:24.663710 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:17:24.667073 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:17:24.685046 systemd-logind[1875]: Removed session 17. Jul 7 06:17:24.687728 systemd[1]: Started sshd@17-172.31.23.116:22-139.178.89.65:59174.service - OpenSSH per-connection server daemon (139.178.89.65:59174). 
Jul 7 06:17:24.882263 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 59174 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:24.883757 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:24.889147 systemd-logind[1875]: New session 18 of user core. Jul 7 06:17:24.896981 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:17:25.253625 sshd[4914]: Connection closed by 139.178.89.65 port 59174 Jul 7 06:17:25.254899 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:25.259452 systemd[1]: sshd@17-172.31.23.116:22-139.178.89.65:59174.service: Deactivated successfully. Jul 7 06:17:25.263777 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:17:25.264906 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:17:25.266796 systemd-logind[1875]: Removed session 18. Jul 7 06:17:25.286556 systemd[1]: Started sshd@18-172.31.23.116:22-139.178.89.65:59188.service - OpenSSH per-connection server daemon (139.178.89.65:59188). Jul 7 06:17:25.465122 sshd[4924]: Accepted publickey for core from 139.178.89.65 port 59188 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:25.466557 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:25.471905 systemd-logind[1875]: New session 19 of user core. Jul 7 06:17:25.484793 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:17:25.674332 sshd[4926]: Connection closed by 139.178.89.65 port 59188 Jul 7 06:17:25.675853 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:25.679864 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:17:25.680529 systemd[1]: sshd@18-172.31.23.116:22-139.178.89.65:59188.service: Deactivated successfully. 
Jul 7 06:17:25.682565 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:17:25.684455 systemd-logind[1875]: Removed session 19. Jul 7 06:17:30.712855 systemd[1]: Started sshd@19-172.31.23.116:22-139.178.89.65:60304.service - OpenSSH per-connection server daemon (139.178.89.65:60304). Jul 7 06:17:30.878703 sshd[4941]: Accepted publickey for core from 139.178.89.65 port 60304 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:30.880073 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:30.885986 systemd-logind[1875]: New session 20 of user core. Jul 7 06:17:30.889875 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 06:17:31.079482 sshd[4943]: Connection closed by 139.178.89.65 port 60304 Jul 7 06:17:31.080289 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:31.084340 systemd[1]: sshd@19-172.31.23.116:22-139.178.89.65:60304.service: Deactivated successfully. Jul 7 06:17:31.086504 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 06:17:31.087966 systemd-logind[1875]: Session 20 logged out. Waiting for processes to exit. Jul 7 06:17:31.089329 systemd-logind[1875]: Removed session 20. Jul 7 06:17:36.115236 systemd[1]: Started sshd@20-172.31.23.116:22-139.178.89.65:60306.service - OpenSSH per-connection server daemon (139.178.89.65:60306). Jul 7 06:17:36.304608 sshd[4954]: Accepted publickey for core from 139.178.89.65 port 60306 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:36.306014 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:36.312086 systemd-logind[1875]: New session 21 of user core. Jul 7 06:17:36.315935 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 7 06:17:36.515031 sshd[4956]: Connection closed by 139.178.89.65 port 60306 Jul 7 06:17:36.515873 sshd-session[4954]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:36.522044 systemd[1]: sshd@20-172.31.23.116:22-139.178.89.65:60306.service: Deactivated successfully. Jul 7 06:17:36.523980 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 06:17:36.525329 systemd-logind[1875]: Session 21 logged out. Waiting for processes to exit. Jul 7 06:17:36.526947 systemd-logind[1875]: Removed session 21. Jul 7 06:17:41.549126 systemd[1]: Started sshd@21-172.31.23.116:22-139.178.89.65:49622.service - OpenSSH per-connection server daemon (139.178.89.65:49622). Jul 7 06:17:41.724963 sshd[4969]: Accepted publickey for core from 139.178.89.65 port 49622 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:41.726397 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:41.732708 systemd-logind[1875]: New session 22 of user core. Jul 7 06:17:41.738947 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 06:17:41.923062 sshd[4971]: Connection closed by 139.178.89.65 port 49622 Jul 7 06:17:41.923858 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:41.928769 systemd-logind[1875]: Session 22 logged out. Waiting for processes to exit. Jul 7 06:17:41.929862 systemd[1]: sshd@21-172.31.23.116:22-139.178.89.65:49622.service: Deactivated successfully. Jul 7 06:17:41.932115 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 06:17:41.934296 systemd-logind[1875]: Removed session 22. Jul 7 06:17:41.955226 systemd[1]: Started sshd@22-172.31.23.116:22-139.178.89.65:49628.service - OpenSSH per-connection server daemon (139.178.89.65:49628). 
Jul 7 06:17:42.120554 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 49628 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:42.121928 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:42.127726 systemd-logind[1875]: New session 23 of user core. Jul 7 06:17:42.134921 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 06:17:43.809235 containerd[1926]: time="2025-07-07T06:17:43.809098158Z" level=info msg="StopContainer for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" with timeout 30 (s)" Jul 7 06:17:43.824394 containerd[1926]: time="2025-07-07T06:17:43.823806542Z" level=info msg="Stop container \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" with signal terminated" Jul 7 06:17:43.854800 systemd[1]: cri-containerd-210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e.scope: Deactivated successfully. Jul 7 06:17:43.855611 systemd[1]: cri-containerd-210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e.scope: Consumed 439ms CPU time, 36.8M memory peak, 15.4M read from disk, 4K written to disk. 
Jul 7 06:17:43.883670 containerd[1926]: time="2025-07-07T06:17:43.883571092Z" level=info msg="received exit event container_id:\"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" id:\"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" pid:3669 exited_at:{seconds:1751869063 nanos:877082856}" Jul 7 06:17:43.902933 containerd[1926]: time="2025-07-07T06:17:43.902827591Z" level=info msg="TaskExit event in podsandbox handler container_id:\"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" id:\"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" pid:3669 exited_at:{seconds:1751869063 nanos:877082856}" Jul 7 06:17:43.932216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e-rootfs.mount: Deactivated successfully. Jul 7 06:17:43.938359 containerd[1926]: time="2025-07-07T06:17:43.938283561Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:17:43.940436 containerd[1926]: time="2025-07-07T06:17:43.940399701Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" id:\"1d7815898721204ce7b69bcae53cc86bdd4cf9175e3c52f1083ee810dcd59578\" pid:5011 exited_at:{seconds:1751869063 nanos:938228542}" Jul 7 06:17:43.955544 containerd[1926]: time="2025-07-07T06:17:43.955500325Z" level=info msg="StopContainer for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" with timeout 2 (s)" Jul 7 06:17:43.956063 containerd[1926]: time="2025-07-07T06:17:43.956024136Z" level=info msg="Stop container \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" with signal terminated"
Jul 7 06:17:43.963033 containerd[1926]: time="2025-07-07T06:17:43.962987853Z" level=info msg="StopContainer for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" returns successfully" Jul 7 06:17:43.964092 containerd[1926]: time="2025-07-07T06:17:43.964057912Z" level=info msg="StopPodSandbox for \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\"" Jul 7 06:17:43.969123 systemd-networkd[1826]: lxc_health: Link DOWN Jul 7 06:17:43.969134 systemd-networkd[1826]: lxc_health: Lost carrier Jul 7 06:17:43.987322 containerd[1926]: time="2025-07-07T06:17:43.987190518Z" level=info msg="Container to stop \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:43.994030 systemd[1]: cri-containerd-0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714.scope: Deactivated successfully. Jul 7 06:17:43.994425 systemd[1]: cri-containerd-0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714.scope: Consumed 7.764s CPU time, 193.3M memory peak, 72.7M read from disk, 13.3M written to disk. Jul 7 06:17:43.997639 containerd[1926]: time="2025-07-07T06:17:43.997585981Z" level=info msg="received exit event container_id:\"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" id:\"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" pid:4088 exited_at:{seconds:1751869063 nanos:996085952}" Jul 7 06:17:43.998167 containerd[1926]: time="2025-07-07T06:17:43.997599497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" id:\"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" pid:4088 exited_at:{seconds:1751869063 nanos:996085952}" Jul 7 06:17:44.001380 systemd[1]: cri-containerd-ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be.scope: Deactivated successfully.
Jul 7 06:17:44.005358 containerd[1926]: time="2025-07-07T06:17:44.005050165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" id:\"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" pid:3456 exit_status:137 exited_at:{seconds:1751869064 nanos:4753296}" Jul 7 06:17:44.040071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714-rootfs.mount: Deactivated successfully. Jul 7 06:17:44.068537 containerd[1926]: time="2025-07-07T06:17:44.068430139Z" level=info msg="StopContainer for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" returns successfully" Jul 7 06:17:44.073143 containerd[1926]: time="2025-07-07T06:17:44.072961415Z" level=info msg="StopPodSandbox for \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\"" Jul 7 06:17:44.073561 containerd[1926]: time="2025-07-07T06:17:44.073390438Z" level=info msg="Container to stop \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:44.073561 containerd[1926]: time="2025-07-07T06:17:44.073417334Z" level=info msg="Container to stop \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:44.073197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be-rootfs.mount: Deactivated successfully. 
Jul 7 06:17:44.075784 containerd[1926]: time="2025-07-07T06:17:44.073430863Z" level=info msg="Container to stop \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:44.075784 containerd[1926]: time="2025-07-07T06:17:44.074025766Z" level=info msg="Container to stop \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:44.075784 containerd[1926]: time="2025-07-07T06:17:44.074044109Z" level=info msg="Container to stop \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:44.084908 systemd[1]: cri-containerd-32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00.scope: Deactivated successfully. Jul 7 06:17:44.094643 containerd[1926]: time="2025-07-07T06:17:44.094600592Z" level=info msg="shim disconnected" id=ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be namespace=k8s.io Jul 7 06:17:44.094643 containerd[1926]: time="2025-07-07T06:17:44.094637381Z" level=warning msg="cleaning up after shim disconnected" id=ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be namespace=k8s.io Jul 7 06:17:44.094987 containerd[1926]: time="2025-07-07T06:17:44.094659211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:17:44.129055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00-rootfs.mount: Deactivated successfully. 
Jul 7 06:17:44.149681 containerd[1926]: time="2025-07-07T06:17:44.149461433Z" level=info msg="shim disconnected" id=32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00 namespace=k8s.io Jul 7 06:17:44.150871 containerd[1926]: time="2025-07-07T06:17:44.150828528Z" level=warning msg="cleaning up after shim disconnected" id=32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00 namespace=k8s.io Jul 7 06:17:44.150986 containerd[1926]: time="2025-07-07T06:17:44.150866740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:17:44.178680 containerd[1926]: time="2025-07-07T06:17:44.178479070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" id:\"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" pid:3629 exit_status:137 exited_at:{seconds:1751869064 nanos:86822120}" Jul 7 06:17:44.194040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be-shm.mount: Deactivated successfully. 
Jul 7 06:17:44.197593 containerd[1926]: time="2025-07-07T06:17:44.197195863Z" level=info msg="TearDown network for sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" successfully" Jul 7 06:17:44.197593 containerd[1926]: time="2025-07-07T06:17:44.197236283Z" level=info msg="StopPodSandbox for \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" returns successfully" Jul 7 06:17:44.198752 containerd[1926]: time="2025-07-07T06:17:44.198432966Z" level=info msg="received exit event sandbox_id:\"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" exit_status:137 exited_at:{seconds:1751869064 nanos:86822120}" Jul 7 06:17:44.199812 containerd[1926]: time="2025-07-07T06:17:44.199777346Z" level=info msg="TearDown network for sandbox \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" successfully" Jul 7 06:17:44.199898 containerd[1926]: time="2025-07-07T06:17:44.199815597Z" level=info msg="StopPodSandbox for \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" returns successfully" Jul 7 06:17:44.199973 containerd[1926]: time="2025-07-07T06:17:44.199952376Z" level=info msg="received exit event sandbox_id:\"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" exit_status:137 exited_at:{seconds:1751869064 nanos:4753296}" Jul 7 06:17:44.326824 kubelet[3270]: I0707 06:17:44.326584 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8g2\" (UniqueName: \"kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-kube-api-access-td8g2\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") "
Jul 7 06:17:44.326824 kubelet[3270]: I0707 06:17:44.326631 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-config-path\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.326824 kubelet[3270]: I0707 06:17:44.326678 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cni-path\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.326824 kubelet[3270]: I0707 06:17:44.326696 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-run\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.326824 kubelet[3270]: I0707 06:17:44.326711 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-xtables-lock\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.326824 kubelet[3270]: I0707 06:17:44.326724 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-lib-modules\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.327359 kubelet[3270]: I0707 06:17:44.326739 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-cgroup\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") "
Jul 7 06:17:44.327359 kubelet[3270]: I0707 06:17:44.326754 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-cilium-config-path\") pod \"bb4f29e6-b2ba-421b-bd60-8b9e19cd539b\" (UID: \"bb4f29e6-b2ba-421b-bd60-8b9e19cd539b\") " Jul 7 06:17:44.327359 kubelet[3270]: I0707 06:17:44.326768 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-etc-cni-netd\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.327359 kubelet[3270]: I0707 06:17:44.326785 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-clustermesh-secrets\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.328089 kubelet[3270]: I0707 06:17:44.326803 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hostproc\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.328089 kubelet[3270]: I0707 06:17:44.327566 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-bpf-maps\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.328089 kubelet[3270]: I0707 06:17:44.327601 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hubble-tls\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") "
Jul 7 06:17:44.328089 kubelet[3270]: I0707 06:17:44.327622 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjjhf\" (UniqueName: \"kubernetes.io/projected/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-kube-api-access-zjjhf\") pod \"bb4f29e6-b2ba-421b-bd60-8b9e19cd539b\" (UID: \"bb4f29e6-b2ba-421b-bd60-8b9e19cd539b\") " Jul 7 06:17:44.328089 kubelet[3270]: I0707 06:17:44.327729 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-kernel\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.328089 kubelet[3270]: I0707 06:17:44.327743 3270 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-net\") pod \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\" (UID: \"b407ee20-16e1-433c-9d0d-b1ccd11db3d0\") " Jul 7 06:17:44.328460 kubelet[3270]: I0707 06:17:44.327803 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:17:44.329798 kubelet[3270]: I0707 06:17:44.329769 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:17:44.329864 kubelet[3270]: I0707 06:17:44.329828 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.329864 kubelet[3270]: I0707 06:17:44.329844 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.329928 kubelet[3270]: I0707 06:17:44.329863 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:17:44.329928 kubelet[3270]: I0707 06:17:44.329878 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.329928 kubelet[3270]: I0707 06:17:44.329900 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.330061 kubelet[3270]: I0707 06:17:44.330043 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.330103 kubelet[3270]: I0707 06:17:44.330066 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 06:17:44.330960 kubelet[3270]: I0707 06:17:44.330931 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.331721 kubelet[3270]: I0707 06:17:44.331698 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 06:17:44.334021 kubelet[3270]: I0707 06:17:44.333994 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-kube-api-access-td8g2" (OuterVolumeSpecName: "kube-api-access-td8g2") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "kube-api-access-td8g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:17:44.335408 kubelet[3270]: I0707 06:17:44.335387 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 06:17:44.335604 kubelet[3270]: I0707 06:17:44.335495 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb4f29e6-b2ba-421b-bd60-8b9e19cd539b" (UID: "bb4f29e6-b2ba-421b-bd60-8b9e19cd539b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:17:44.336157 kubelet[3270]: I0707 06:17:44.336131 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-kube-api-access-zjjhf" (OuterVolumeSpecName: "kube-api-access-zjjhf") pod "bb4f29e6-b2ba-421b-bd60-8b9e19cd539b" (UID: "bb4f29e6-b2ba-421b-bd60-8b9e19cd539b"). InnerVolumeSpecName "kube-api-access-zjjhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:17:44.336582 kubelet[3270]: I0707 06:17:44.336558 3270 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b407ee20-16e1-433c-9d0d-b1ccd11db3d0" (UID: "b407ee20-16e1-433c-9d0d-b1ccd11db3d0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:17:44.428219 kubelet[3270]: I0707 06:17:44.428175 3270 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-kernel\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428219 kubelet[3270]: I0707 06:17:44.428211 3270 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-host-proc-sys-net\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428219 kubelet[3270]: I0707 06:17:44.428222 3270 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-config-path\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428219 kubelet[3270]: I0707 06:17:44.428232 3270 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cni-path\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428240 3270 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8g2\" (UniqueName: \"kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-kube-api-access-td8g2\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428249 3270 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-xtables-lock\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428259 3270 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-lib-modules\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428269 3270 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-cgroup\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428278 3270 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-cilium-run\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428286 3270 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-cilium-config-path\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428378 3270 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-clustermesh-secrets\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428461 kubelet[3270]: I0707 06:17:44.428389 3270 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-etc-cni-netd\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428702 kubelet[3270]: I0707 06:17:44.428398 3270 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hostproc\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428702 kubelet[3270]: I0707 06:17:44.428405 3270 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-bpf-maps\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428702 kubelet[3270]: I0707 06:17:44.428413 3270 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b407ee20-16e1-433c-9d0d-b1ccd11db3d0-hubble-tls\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.428702 kubelet[3270]: I0707 06:17:44.428420 3270 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjjhf\" (UniqueName: \"kubernetes.io/projected/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b-kube-api-access-zjjhf\") on node \"ip-172-31-23-116\" DevicePath \"\"" Jul 7 06:17:44.754789 kubelet[3270]: I0707 06:17:44.754755 3270 scope.go:117] "RemoveContainer" containerID="0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714" Jul 7 06:17:44.760183 containerd[1926]: time="2025-07-07T06:17:44.760092053Z" level=info msg="RemoveContainer for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\"" Jul 7 06:17:44.762921 systemd[1]: Removed slice kubepods-burstable-podb407ee20_16e1_433c_9d0d_b1ccd11db3d0.slice - libcontainer container 
kubepods-burstable-podb407ee20_16e1_433c_9d0d_b1ccd11db3d0.slice. Jul 7 06:17:44.763037 systemd[1]: kubepods-burstable-podb407ee20_16e1_433c_9d0d_b1ccd11db3d0.slice: Consumed 7.867s CPU time, 193.7M memory peak, 74.1M read from disk, 13.3M written to disk. Jul 7 06:17:44.770575 systemd[1]: Removed slice kubepods-besteffort-podbb4f29e6_b2ba_421b_bd60_8b9e19cd539b.slice - libcontainer container kubepods-besteffort-podbb4f29e6_b2ba_421b_bd60_8b9e19cd539b.slice. Jul 7 06:17:44.770747 systemd[1]: kubepods-besteffort-podbb4f29e6_b2ba_421b_bd60_8b9e19cd539b.slice: Consumed 471ms CPU time, 37.1M memory peak, 15.4M read from disk, 4K written to disk. Jul 7 06:17:44.775743 containerd[1926]: time="2025-07-07T06:17:44.775616364Z" level=info msg="RemoveContainer for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" returns successfully" Jul 7 06:17:44.776203 kubelet[3270]: I0707 06:17:44.776129 3270 scope.go:117] "RemoveContainer" containerID="4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500" Jul 7 06:17:44.778392 containerd[1926]: time="2025-07-07T06:17:44.777926230Z" level=info msg="RemoveContainer for \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\"" Jul 7 06:17:44.784799 containerd[1926]: time="2025-07-07T06:17:44.784745610Z" level=info msg="RemoveContainer for \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" returns successfully" Jul 7 06:17:44.787435 kubelet[3270]: I0707 06:17:44.787407 3270 scope.go:117] "RemoveContainer" containerID="eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8" Jul 7 06:17:44.790220 containerd[1926]: time="2025-07-07T06:17:44.790176647Z" level=info msg="RemoveContainer for \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\"" Jul 7 06:17:44.797163 containerd[1926]: time="2025-07-07T06:17:44.797127251Z" level=info msg="RemoveContainer for \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" returns successfully" Jul 7 
06:17:44.798224 kubelet[3270]: I0707 06:17:44.798178 3270 scope.go:117] "RemoveContainer" containerID="ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea" Jul 7 06:17:44.801372 containerd[1926]: time="2025-07-07T06:17:44.801338946Z" level=info msg="RemoveContainer for \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\"" Jul 7 06:17:44.807084 containerd[1926]: time="2025-07-07T06:17:44.807042620Z" level=info msg="RemoveContainer for \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" returns successfully" Jul 7 06:17:44.807367 kubelet[3270]: I0707 06:17:44.807304 3270 scope.go:117] "RemoveContainer" containerID="b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954" Jul 7 06:17:44.809169 containerd[1926]: time="2025-07-07T06:17:44.809136289Z" level=info msg="RemoveContainer for \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\"" Jul 7 06:17:44.814524 containerd[1926]: time="2025-07-07T06:17:44.814462942Z" level=info msg="RemoveContainer for \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" returns successfully" Jul 7 06:17:44.814969 kubelet[3270]: I0707 06:17:44.814697 3270 scope.go:117] "RemoveContainer" containerID="0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714" Jul 7 06:17:44.817767 containerd[1926]: time="2025-07-07T06:17:44.815189893Z" level=error msg="ContainerStatus for \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\": not found" Jul 7 06:17:44.818774 kubelet[3270]: E0707 06:17:44.818723 3270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\": not found" 
containerID="0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714" Jul 7 06:17:44.820642 kubelet[3270]: I0707 06:17:44.820530 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714"} err="failed to get container status \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\": rpc error: code = NotFound desc = an error occurred when try to find container \"0df31c409513493c5b7f230e8cc4276a99917c782b408f5ec1460ca06d4b7714\": not found" Jul 7 06:17:44.820642 kubelet[3270]: I0707 06:17:44.820641 3270 scope.go:117] "RemoveContainer" containerID="4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500" Jul 7 06:17:44.820957 containerd[1926]: time="2025-07-07T06:17:44.820922954Z" level=error msg="ContainerStatus for \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\": not found" Jul 7 06:17:44.821145 kubelet[3270]: E0707 06:17:44.821098 3270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\": not found" containerID="4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500" Jul 7 06:17:44.821145 kubelet[3270]: I0707 06:17:44.821125 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500"} err="failed to get container status \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e25f2a83f0594039efe9b1dea3a7010cf5c95c82529ee5ac9d78e677295a500\": not found" Jul 7 06:17:44.821145 
kubelet[3270]: I0707 06:17:44.821142 3270 scope.go:117] "RemoveContainer" containerID="eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8" Jul 7 06:17:44.821472 containerd[1926]: time="2025-07-07T06:17:44.821322749Z" level=error msg="ContainerStatus for \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\": not found" Jul 7 06:17:44.821509 kubelet[3270]: E0707 06:17:44.821478 3270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\": not found" containerID="eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8" Jul 7 06:17:44.821536 kubelet[3270]: I0707 06:17:44.821501 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8"} err="failed to get container status \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb3a8735ccacd0008494532bbf7b3bac751566ba563f967475de36c14f5e0eb8\": not found" Jul 7 06:17:44.821536 kubelet[3270]: I0707 06:17:44.821518 3270 scope.go:117] "RemoveContainer" containerID="ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea" Jul 7 06:17:44.821822 containerd[1926]: time="2025-07-07T06:17:44.821712046Z" level=error msg="ContainerStatus for \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\": not found" Jul 7 06:17:44.821922 kubelet[3270]: E0707 06:17:44.821895 3270 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\": not found" containerID="ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea" Jul 7 06:17:44.821968 kubelet[3270]: I0707 06:17:44.821919 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea"} err="failed to get container status \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad7399e8663af7b4f691195c035c1e25606917d75ad7e48a9500463519c4c1ea\": not found" Jul 7 06:17:44.821968 kubelet[3270]: I0707 06:17:44.821941 3270 scope.go:117] "RemoveContainer" containerID="b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954" Jul 7 06:17:44.822173 containerd[1926]: time="2025-07-07T06:17:44.822076548Z" level=error msg="ContainerStatus for \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\": not found" Jul 7 06:17:44.822312 kubelet[3270]: E0707 06:17:44.822281 3270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\": not found" containerID="b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954" Jul 7 06:17:44.822312 kubelet[3270]: I0707 06:17:44.822304 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954"} err="failed to get container status 
\"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1d62c57e0653a9e6c8a8fa01f97ed08d61c7834d0b6664fbba1db023d7c4954\": not found" Jul 7 06:17:44.822507 kubelet[3270]: I0707 06:17:44.822319 3270 scope.go:117] "RemoveContainer" containerID="210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e" Jul 7 06:17:44.823916 containerd[1926]: time="2025-07-07T06:17:44.823883045Z" level=info msg="RemoveContainer for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\"" Jul 7 06:17:44.829580 containerd[1926]: time="2025-07-07T06:17:44.829543304Z" level=info msg="RemoveContainer for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" returns successfully" Jul 7 06:17:44.829819 kubelet[3270]: I0707 06:17:44.829791 3270 scope.go:117] "RemoveContainer" containerID="210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e" Jul 7 06:17:44.830035 containerd[1926]: time="2025-07-07T06:17:44.830008002Z" level=error msg="ContainerStatus for \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\": not found" Jul 7 06:17:44.830164 kubelet[3270]: E0707 06:17:44.830140 3270 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\": not found" containerID="210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e" Jul 7 06:17:44.830207 kubelet[3270]: I0707 06:17:44.830170 3270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e"} err="failed to get container status 
\"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\": rpc error: code = NotFound desc = an error occurred when try to find container \"210ebafd824ea4aa3f29ab559a3aa052e98a843a43c40d9372ca787aad22b89e\": not found" Jul 7 06:17:44.927233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00-shm.mount: Deactivated successfully. Jul 7 06:17:44.927360 systemd[1]: var-lib-kubelet-pods-b407ee20\x2d16e1\x2d433c\x2d9d0d\x2db1ccd11db3d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 06:17:44.927443 systemd[1]: var-lib-kubelet-pods-b407ee20\x2d16e1\x2d433c\x2d9d0d\x2db1ccd11db3d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 06:17:44.927506 systemd[1]: var-lib-kubelet-pods-bb4f29e6\x2db2ba\x2d421b\x2dbd60\x2d8b9e19cd539b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjjhf.mount: Deactivated successfully. Jul 7 06:17:44.927575 systemd[1]: var-lib-kubelet-pods-b407ee20\x2d16e1\x2d433c\x2d9d0d\x2db1ccd11db3d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtd8g2.mount: Deactivated successfully. Jul 7 06:17:45.362207 kubelet[3270]: I0707 06:17:45.362152 3270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" path="/var/lib/kubelet/pods/b407ee20-16e1-433c-9d0d-b1ccd11db3d0/volumes" Jul 7 06:17:45.362763 kubelet[3270]: I0707 06:17:45.362723 3270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4f29e6-b2ba-421b-bd60-8b9e19cd539b" path="/var/lib/kubelet/pods/bb4f29e6-b2ba-421b-bd60-8b9e19cd539b/volumes" Jul 7 06:17:45.772769 sshd[4985]: Connection closed by 139.178.89.65 port 49628 Jul 7 06:17:45.773124 sshd-session[4983]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:45.777843 systemd[1]: sshd@22-172.31.23.116:22-139.178.89.65:49628.service: Deactivated successfully. 
Jul 7 06:17:45.780523 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 06:17:45.781949 systemd-logind[1875]: Session 23 logged out. Waiting for processes to exit. Jul 7 06:17:45.784512 systemd-logind[1875]: Removed session 23. Jul 7 06:17:45.812502 systemd[1]: Started sshd@23-172.31.23.116:22-139.178.89.65:49644.service - OpenSSH per-connection server daemon (139.178.89.65:49644). Jul 7 06:17:45.983188 sshd[5134]: Accepted publickey for core from 139.178.89.65 port 49644 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:45.985052 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:45.990249 systemd-logind[1875]: New session 24 of user core. Jul 7 06:17:45.996962 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 06:17:46.036248 containerd[1926]: time="2025-07-07T06:17:46.036131842Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1751869064 nanos:4753296}" Jul 7 06:17:46.095096 ntpd[1869]: Deleting interface #12 lxc_health, fe80::4423:4eff:fe96:7ef5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=60 secs Jul 7 06:17:46.095457 ntpd[1869]: 7 Jul 06:17:46 ntpd[1869]: Deleting interface #12 lxc_health, fe80::4423:4eff:fe96:7ef5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=60 secs Jul 7 06:17:46.529815 sshd[5136]: Connection closed by 139.178.89.65 port 49644 Jul 7 06:17:46.536819 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:46.543845 kubelet[3270]: E0707 06:17:46.542871 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bb4f29e6-b2ba-421b-bd60-8b9e19cd539b" containerName="cilium-operator" Jul 7 06:17:46.543845 kubelet[3270]: E0707 06:17:46.542924 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" containerName="clean-cilium-state" Jul 7 06:17:46.543845 
kubelet[3270]: E0707 06:17:46.542934 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" containerName="cilium-agent" Jul 7 06:17:46.543845 kubelet[3270]: E0707 06:17:46.542944 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" containerName="mount-cgroup" Jul 7 06:17:46.543845 kubelet[3270]: E0707 06:17:46.542952 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" containerName="apply-sysctl-overwrites" Jul 7 06:17:46.543845 kubelet[3270]: E0707 06:17:46.542961 3270 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" containerName="mount-bpf-fs" Jul 7 06:17:46.543845 kubelet[3270]: I0707 06:17:46.543022 3270 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4f29e6-b2ba-421b-bd60-8b9e19cd539b" containerName="cilium-operator" Jul 7 06:17:46.543845 kubelet[3270]: I0707 06:17:46.543032 3270 memory_manager.go:354] "RemoveStaleState removing state" podUID="b407ee20-16e1-433c-9d0d-b1ccd11db3d0" containerName="cilium-agent" Jul 7 06:17:46.544503 systemd[1]: sshd@23-172.31.23.116:22-139.178.89.65:49644.service: Deactivated successfully. Jul 7 06:17:46.550527 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 06:17:46.557492 systemd-logind[1875]: Session 24 logged out. Waiting for processes to exit. Jul 7 06:17:46.584228 systemd[1]: Started sshd@24-172.31.23.116:22-139.178.89.65:49656.service - OpenSSH per-connection server daemon (139.178.89.65:49656). Jul 7 06:17:46.589218 systemd-logind[1875]: Removed session 24. Jul 7 06:17:46.603124 systemd[1]: Created slice kubepods-burstable-pod097eeaf5_4082_4845_a357_5fda5b2f4611.slice - libcontainer container kubepods-burstable-pod097eeaf5_4082_4845_a357_5fda5b2f4611.slice. 
Jul 7 06:17:46.641781 kubelet[3270]: I0707 06:17:46.641743 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-cilium-cgroup\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643316 kubelet[3270]: I0707 06:17:46.643161 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-cni-path\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643316 kubelet[3270]: I0707 06:17:46.643212 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-bpf-maps\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643316 kubelet[3270]: I0707 06:17:46.643253 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/097eeaf5-4082-4845-a357-5fda5b2f4611-cilium-config-path\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643316 kubelet[3270]: I0707 06:17:46.643278 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/097eeaf5-4082-4845-a357-5fda5b2f4611-cilium-ipsec-secrets\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643733 kubelet[3270]: I0707 06:17:46.643620 3270 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-etc-cni-netd\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643733 kubelet[3270]: I0707 06:17:46.643676 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-xtables-lock\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643733 kubelet[3270]: I0707 06:17:46.643701 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwrm\" (UniqueName: \"kubernetes.io/projected/097eeaf5-4082-4845-a357-5fda5b2f4611-kube-api-access-scwrm\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643989 kubelet[3270]: I0707 06:17:46.643836 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/097eeaf5-4082-4845-a357-5fda5b2f4611-clustermesh-secrets\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.643989 kubelet[3270]: I0707 06:17:46.643862 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-host-proc-sys-kernel\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.644844 kubelet[3270]: I0707 06:17:46.644730 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-cilium-run\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.644844 kubelet[3270]: I0707 06:17:46.644782 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/097eeaf5-4082-4845-a357-5fda5b2f4611-hubble-tls\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.645107 kubelet[3270]: I0707 06:17:46.644825 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-host-proc-sys-net\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.645107 kubelet[3270]: I0707 06:17:46.645024 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-hostproc\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.645107 kubelet[3270]: I0707 06:17:46.645064 3270 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/097eeaf5-4082-4845-a357-5fda5b2f4611-lib-modules\") pod \"cilium-m6ngh\" (UID: \"097eeaf5-4082-4845-a357-5fda5b2f4611\") " pod="kube-system/cilium-m6ngh" Jul 7 06:17:46.778493 sshd[5146]: Accepted publickey for core from 139.178.89.65 port 49656 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:46.780157 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:46.786724 systemd-logind[1875]: New session 25 of user core. 
Jul 7 06:17:46.791872 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 06:17:46.912748 sshd[5154]: Connection closed by 139.178.89.65 port 49656 Jul 7 06:17:46.914207 containerd[1926]: time="2025-07-07T06:17:46.912601288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6ngh,Uid:097eeaf5-4082-4845-a357-5fda5b2f4611,Namespace:kube-system,Attempt:0,}" Jul 7 06:17:46.912945 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:46.917501 systemd[1]: sshd@24-172.31.23.116:22-139.178.89.65:49656.service: Deactivated successfully. Jul 7 06:17:46.919898 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 06:17:46.921397 systemd-logind[1875]: Session 25 logged out. Waiting for processes to exit. Jul 7 06:17:46.923204 systemd-logind[1875]: Removed session 25. Jul 7 06:17:46.946460 containerd[1926]: time="2025-07-07T06:17:46.946324604Z" level=info msg="connecting to shim 718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc" address="unix:///run/containerd/s/8f105856b7014ac983873cf3ccb41a8f782e7bba5243b2435dc2f2b8f757f66a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:46.957983 systemd[1]: Started sshd@25-172.31.23.116:22-139.178.89.65:49668.service - OpenSSH per-connection server daemon (139.178.89.65:49668). Jul 7 06:17:46.985115 systemd[1]: Started cri-containerd-718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc.scope - libcontainer container 718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc. 
Jul 7 06:17:47.027442 containerd[1926]: time="2025-07-07T06:17:47.027345372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6ngh,Uid:097eeaf5-4082-4845-a357-5fda5b2f4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\"" Jul 7 06:17:47.031603 containerd[1926]: time="2025-07-07T06:17:47.030476595Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 06:17:47.046398 containerd[1926]: time="2025-07-07T06:17:47.046364434Z" level=info msg="Container d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:47.059173 containerd[1926]: time="2025-07-07T06:17:47.059141195Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\"" Jul 7 06:17:47.060812 containerd[1926]: time="2025-07-07T06:17:47.060786350Z" level=info msg="StartContainer for \"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\"" Jul 7 06:17:47.062781 containerd[1926]: time="2025-07-07T06:17:47.062742869Z" level=info msg="connecting to shim d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c" address="unix:///run/containerd/s/8f105856b7014ac983873cf3ccb41a8f782e7bba5243b2435dc2f2b8f757f66a" protocol=ttrpc version=3 Jul 7 06:17:47.084900 systemd[1]: Started cri-containerd-d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c.scope - libcontainer container d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c. 
Jul 7 06:17:47.121908 containerd[1926]: time="2025-07-07T06:17:47.121869610Z" level=info msg="StartContainer for \"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\" returns successfully" Jul 7 06:17:47.143932 systemd[1]: cri-containerd-d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c.scope: Deactivated successfully. Jul 7 06:17:47.144202 systemd[1]: cri-containerd-d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c.scope: Consumed 23ms CPU time, 9.8M memory peak, 3.4M read from disk. Jul 7 06:17:47.146355 containerd[1926]: time="2025-07-07T06:17:47.146243468Z" level=info msg="received exit event container_id:\"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\" id:\"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\" pid:5220 exited_at:{seconds:1751869067 nanos:145841364}" Jul 7 06:17:47.146440 containerd[1926]: time="2025-07-07T06:17:47.146353111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\" id:\"d9b35b8668ebe2853447a9d94a53c95b8c879d1df780cfc05a25abaa813da35c\" pid:5220 exited_at:{seconds:1751869067 nanos:145841364}" Jul 7 06:17:47.163104 sshd[5179]: Accepted publickey for core from 139.178.89.65 port 49668 ssh2: RSA SHA256:1kMr/NVCVvXsWyefEhF1pHl3N2KP5iBIRut6ncABJco Jul 7 06:17:47.165640 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:47.172714 systemd-logind[1875]: New session 26 of user core. Jul 7 06:17:47.183891 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 7 06:17:47.513504 kubelet[3270]: E0707 06:17:47.513459 3270 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 06:17:47.758671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088581148.mount: Deactivated successfully. Jul 7 06:17:47.776196 containerd[1926]: time="2025-07-07T06:17:47.775897669Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 06:17:47.793072 containerd[1926]: time="2025-07-07T06:17:47.793016221Z" level=info msg="Container 8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:47.823615 containerd[1926]: time="2025-07-07T06:17:47.823561848Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\"" Jul 7 06:17:47.824416 containerd[1926]: time="2025-07-07T06:17:47.824368926Z" level=info msg="StartContainer for \"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\"" Jul 7 06:17:47.826006 containerd[1926]: time="2025-07-07T06:17:47.825957466Z" level=info msg="connecting to shim 8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43" address="unix:///run/containerd/s/8f105856b7014ac983873cf3ccb41a8f782e7bba5243b2435dc2f2b8f757f66a" protocol=ttrpc version=3 Jul 7 06:17:47.856891 systemd[1]: Started cri-containerd-8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43.scope - libcontainer container 8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43. 
Jul 7 06:17:47.893698 containerd[1926]: time="2025-07-07T06:17:47.892973837Z" level=info msg="StartContainer for \"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\" returns successfully" Jul 7 06:17:47.907393 systemd[1]: cri-containerd-8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43.scope: Deactivated successfully. Jul 7 06:17:47.907723 systemd[1]: cri-containerd-8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43.scope: Consumed 20ms CPU time, 7.5M memory peak, 2.1M read from disk. Jul 7 06:17:47.909214 containerd[1926]: time="2025-07-07T06:17:47.909098706Z" level=info msg="received exit event container_id:\"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\" id:\"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\" pid:5269 exited_at:{seconds:1751869067 nanos:908907979}" Jul 7 06:17:47.909311 containerd[1926]: time="2025-07-07T06:17:47.909232575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\" id:\"8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43\" pid:5269 exited_at:{seconds:1751869067 nanos:908907979}" Jul 7 06:17:47.934179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8210590a4d291093a6eba54ae41a357e14a58fd89e47886dc2ed5ec3852adb43-rootfs.mount: Deactivated successfully. 
Jul 7 06:17:48.780620 containerd[1926]: time="2025-07-07T06:17:48.780543264Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:17:48.797343 containerd[1926]: time="2025-07-07T06:17:48.794676237Z" level=info msg="Container 512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:48.812423 containerd[1926]: time="2025-07-07T06:17:48.812222855Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\"" Jul 7 06:17:48.813085 containerd[1926]: time="2025-07-07T06:17:48.813059167Z" level=info msg="StartContainer for \"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\"" Jul 7 06:17:48.815793 containerd[1926]: time="2025-07-07T06:17:48.815749792Z" level=info msg="connecting to shim 512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c" address="unix:///run/containerd/s/8f105856b7014ac983873cf3ccb41a8f782e7bba5243b2435dc2f2b8f757f66a" protocol=ttrpc version=3 Jul 7 06:17:48.847914 systemd[1]: Started cri-containerd-512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c.scope - libcontainer container 512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c. Jul 7 06:17:48.889619 containerd[1926]: time="2025-07-07T06:17:48.889556639Z" level=info msg="StartContainer for \"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\" returns successfully" Jul 7 06:17:48.894134 systemd[1]: cri-containerd-512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c.scope: Deactivated successfully. 
Jul 7 06:17:48.896855 containerd[1926]: time="2025-07-07T06:17:48.896793636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\" id:\"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\" pid:5315 exited_at:{seconds:1751869068 nanos:896213704}" Jul 7 06:17:48.897060 containerd[1926]: time="2025-07-07T06:17:48.896822517Z" level=info msg="received exit event container_id:\"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\" id:\"512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c\" pid:5315 exited_at:{seconds:1751869068 nanos:896213704}" Jul 7 06:17:48.930569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512ca9f8bdc0b986094e60aa9c30e56f6783384e48f2233a5fa28da50c074d9c-rootfs.mount: Deactivated successfully. Jul 7 06:17:49.785745 containerd[1926]: time="2025-07-07T06:17:49.785668200Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 06:17:49.804226 containerd[1926]: time="2025-07-07T06:17:49.804179259Z" level=info msg="Container 1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:49.815625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629367026.mount: Deactivated successfully. 
Jul 7 06:17:49.824201 containerd[1926]: time="2025-07-07T06:17:49.824153599Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\"" Jul 7 06:17:49.825183 containerd[1926]: time="2025-07-07T06:17:49.824970289Z" level=info msg="StartContainer for \"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\"" Jul 7 06:17:49.826730 containerd[1926]: time="2025-07-07T06:17:49.826633148Z" level=info msg="connecting to shim 1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8" address="unix:///run/containerd/s/8f105856b7014ac983873cf3ccb41a8f782e7bba5243b2435dc2f2b8f757f66a" protocol=ttrpc version=3 Jul 7 06:17:49.859110 systemd[1]: Started cri-containerd-1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8.scope - libcontainer container 1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8. Jul 7 06:17:49.914427 containerd[1926]: time="2025-07-07T06:17:49.914362598Z" level=info msg="StartContainer for \"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\" returns successfully" Jul 7 06:17:49.921863 systemd[1]: cri-containerd-1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8.scope: Deactivated successfully. 
Jul 7 06:17:49.924323 containerd[1926]: time="2025-07-07T06:17:49.924247409Z" level=info msg="received exit event container_id:\"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\" id:\"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\" pid:5354 exited_at:{seconds:1751869069 nanos:923499076}" Jul 7 06:17:49.925076 containerd[1926]: time="2025-07-07T06:17:49.925050349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\" id:\"1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8\" pid:5354 exited_at:{seconds:1751869069 nanos:923499076}" Jul 7 06:17:49.958887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1162928ebdfaca89331cf3e543253da75c71816999a859e7f67e779f4659c4e8-rootfs.mount: Deactivated successfully. Jul 7 06:17:50.118155 kubelet[3270]: I0707 06:17:50.117930 3270 setters.go:600] "Node became not ready" node="ip-172-31-23-116" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T06:17:50Z","lastTransitionTime":"2025-07-07T06:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 06:17:50.802739 containerd[1926]: time="2025-07-07T06:17:50.802689756Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 06:17:50.821297 containerd[1926]: time="2025-07-07T06:17:50.821151899Z" level=info msg="Container ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:50.845123 containerd[1926]: time="2025-07-07T06:17:50.845074910Z" level=info msg="CreateContainer within sandbox \"718dc76b157f771b9cbc9cec2bb424d40277d8317075220d1fda823d473d7dfc\" for 
&ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\"" Jul 7 06:17:50.847307 containerd[1926]: time="2025-07-07T06:17:50.846002549Z" level=info msg="StartContainer for \"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\"" Jul 7 06:17:50.847307 containerd[1926]: time="2025-07-07T06:17:50.847171168Z" level=info msg="connecting to shim ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e" address="unix:///run/containerd/s/8f105856b7014ac983873cf3ccb41a8f782e7bba5243b2435dc2f2b8f757f66a" protocol=ttrpc version=3 Jul 7 06:17:50.877969 systemd[1]: Started cri-containerd-ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e.scope - libcontainer container ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e. Jul 7 06:17:50.920631 containerd[1926]: time="2025-07-07T06:17:50.920587900Z" level=info msg="StartContainer for \"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" returns successfully" Jul 7 06:17:51.045899 containerd[1926]: time="2025-07-07T06:17:51.045863324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" id:\"e2701e15938ec840fd06de14fcebc76952616a8c3a1b0f1ba76bf65b347a0691\" pid:5420 exited_at:{seconds:1751869071 nanos:45556354}" Jul 7 06:17:51.751684 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 06:17:51.820628 kubelet[3270]: I0707 06:17:51.820218 3270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m6ngh" podStartSLOduration=5.820199329 podStartE2EDuration="5.820199329s" podCreationTimestamp="2025-07-07 06:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:17:51.819246481 +0000 UTC m=+94.646199458" watchObservedRunningTime="2025-07-07 
06:17:51.820199329 +0000 UTC m=+94.647152306" Jul 7 06:17:53.927989 containerd[1926]: time="2025-07-07T06:17:53.927943567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" id:\"4e0daff996f792fbb7a41e2ae178336f407d72f19429d10b9c86986909a8c868\" pid:5593 exit_status:1 exited_at:{seconds:1751869073 nanos:926839564}" Jul 7 06:17:54.911475 (udev-worker)[5909]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:17:54.918900 systemd-networkd[1826]: lxc_health: Link UP Jul 7 06:17:54.930925 (udev-worker)[5910]: Network interface NamePolicy= disabled on kernel command line. Jul 7 06:17:54.932771 systemd-networkd[1826]: lxc_health: Gained carrier Jul 7 06:17:56.215459 containerd[1926]: time="2025-07-07T06:17:56.215410888Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" id:\"893ce2b71621e23efc2212ea370ce6cce833edb69f9e64158a0e2493f0532379\" pid:5941 exited_at:{seconds:1751869076 nanos:213519383}" Jul 7 06:17:56.899815 systemd-networkd[1826]: lxc_health: Gained IPv6LL Jul 7 06:17:58.429476 containerd[1926]: time="2025-07-07T06:17:58.429374749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" id:\"30bb65c02ddfca9143065d9c4fa5554b3996b4eeeadfcd7a9c1324cf1a2c61c8\" pid:5972 exited_at:{seconds:1751869078 nanos:428172688}" Jul 7 06:17:59.096720 ntpd[1869]: Listen normally on 15 lxc_health [fe80::5c7a:3aff:fe87:158c%14]:123 Jul 7 06:17:59.097247 ntpd[1869]: 7 Jul 06:17:59 ntpd[1869]: Listen normally on 15 lxc_health [fe80::5c7a:3aff:fe87:158c%14]:123 Jul 7 06:18:00.549573 containerd[1926]: time="2025-07-07T06:18:00.549524302Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" 
id:\"cac56ee17c4d82ae3b1350c0b33a4c2095480019d83ea8a9f467c366606c3a59\" pid:6005 exited_at:{seconds:1751869080 nanos:548738890}" Jul 7 06:18:02.700880 containerd[1926]: time="2025-07-07T06:18:02.700811280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecb8f6689b16b23e59904d0090d15aee128a854b3daa5423bd712430e653ae1e\" id:\"d9a88526d6bff33c168d8fc9ccc75631ef549234ca3f3bb1871bec5f232fb7a8\" pid:6030 exited_at:{seconds:1751869082 nanos:699017184}" Jul 7 06:18:02.728936 sshd[5251]: Connection closed by 139.178.89.65 port 49668 Jul 7 06:18:02.730190 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:02.753187 systemd[1]: sshd@25-172.31.23.116:22-139.178.89.65:49668.service: Deactivated successfully. Jul 7 06:18:02.755947 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:18:02.758003 systemd-logind[1875]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:18:02.760417 systemd-logind[1875]: Removed session 26. Jul 7 06:18:17.329018 systemd[1]: cri-containerd-aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6.scope: Deactivated successfully. Jul 7 06:18:17.330905 systemd[1]: cri-containerd-aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6.scope: Consumed 3.009s CPU time, 69.4M memory peak, 20.5M read from disk. 
Jul 7 06:18:17.332742 containerd[1926]: time="2025-07-07T06:18:17.331344212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\" id:\"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\" pid:3104 exit_status:1 exited_at:{seconds:1751869097 nanos:330467063}" Jul 7 06:18:17.332742 containerd[1926]: time="2025-07-07T06:18:17.331408787Z" level=info msg="received exit event container_id:\"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\" id:\"aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6\" pid:3104 exit_status:1 exited_at:{seconds:1751869097 nanos:330467063}" Jul 7 06:18:17.335675 containerd[1926]: time="2025-07-07T06:18:17.335073004Z" level=info msg="StopPodSandbox for \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\"" Jul 7 06:18:17.335986 containerd[1926]: time="2025-07-07T06:18:17.335908597Z" level=info msg="TearDown network for sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" successfully" Jul 7 06:18:17.335986 containerd[1926]: time="2025-07-07T06:18:17.335926574Z" level=info msg="StopPodSandbox for \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" returns successfully" Jul 7 06:18:17.336582 containerd[1926]: time="2025-07-07T06:18:17.336562461Z" level=info msg="RemovePodSandbox for \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\"" Jul 7 06:18:17.337591 containerd[1926]: time="2025-07-07T06:18:17.337339685Z" level=info msg="Forcibly stopping sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\"" Jul 7 06:18:17.337591 containerd[1926]: time="2025-07-07T06:18:17.337425089Z" level=info msg="TearDown network for sandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" successfully" Jul 7 06:18:17.351052 containerd[1926]: time="2025-07-07T06:18:17.351000210Z" level=info msg="Ensure that sandbox 
32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00 in task-service has been cleanup successfully" Jul 7 06:18:17.357969 containerd[1926]: time="2025-07-07T06:18:17.357278255Z" level=info msg="RemovePodSandbox \"32f7754e42bcb40a038e49bedffc007640fc6c35b0089a271360344c84c16a00\" returns successfully" Jul 7 06:18:17.358945 containerd[1926]: time="2025-07-07T06:18:17.358915257Z" level=info msg="StopPodSandbox for \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\"" Jul 7 06:18:17.359101 containerd[1926]: time="2025-07-07T06:18:17.359085100Z" level=info msg="TearDown network for sandbox \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" successfully" Jul 7 06:18:17.359136 containerd[1926]: time="2025-07-07T06:18:17.359101034Z" level=info msg="StopPodSandbox for \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" returns successfully" Jul 7 06:18:17.365708 containerd[1926]: time="2025-07-07T06:18:17.365453097Z" level=info msg="RemovePodSandbox for \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\"" Jul 7 06:18:17.365708 containerd[1926]: time="2025-07-07T06:18:17.365497997Z" level=info msg="Forcibly stopping sandbox \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\"" Jul 7 06:18:17.366054 containerd[1926]: time="2025-07-07T06:18:17.366036114Z" level=info msg="TearDown network for sandbox \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" successfully" Jul 7 06:18:17.368563 containerd[1926]: time="2025-07-07T06:18:17.368539454Z" level=info msg="Ensure that sandbox ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be in task-service has been cleanup successfully" Jul 7 06:18:17.371372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6-rootfs.mount: Deactivated successfully. 
Jul 7 06:18:17.395899 containerd[1926]: time="2025-07-07T06:18:17.395839565Z" level=info msg="RemovePodSandbox \"ae6cfc246bf73122fb6a9b318b614ab1cd5114dc234d41b40df43ddb9a2077be\" returns successfully" Jul 7 06:18:17.873478 kubelet[3270]: I0707 06:18:17.873118 3270 scope.go:117] "RemoveContainer" containerID="aab461a0244d51fec1efbe925561ab31c832d52ffd66b42d137842073c17e4d6" Jul 7 06:18:17.875746 containerd[1926]: time="2025-07-07T06:18:17.875712954Z" level=info msg="CreateContainer within sandbox \"b36399912801631363094753b4c1af163945e642372137e4683681914de944f5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 06:18:17.894665 containerd[1926]: time="2025-07-07T06:18:17.892469945Z" level=info msg="Container 81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:18:17.898415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516598142.mount: Deactivated successfully. Jul 7 06:18:17.906021 containerd[1926]: time="2025-07-07T06:18:17.905958319Z" level=info msg="CreateContainer within sandbox \"b36399912801631363094753b4c1af163945e642372137e4683681914de944f5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988\"" Jul 7 06:18:17.907682 containerd[1926]: time="2025-07-07T06:18:17.906419625Z" level=info msg="StartContainer for \"81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988\"" Jul 7 06:18:17.907682 containerd[1926]: time="2025-07-07T06:18:17.907456408Z" level=info msg="connecting to shim 81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988" address="unix:///run/containerd/s/0d070009a2f79052aca058b8e950f3b5daeaf10a7a90b2d49c243eaf6837d478" protocol=ttrpc version=3 Jul 7 06:18:17.938902 systemd[1]: Started cri-containerd-81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988.scope - libcontainer container 
81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988. Jul 7 06:18:18.021694 containerd[1926]: time="2025-07-07T06:18:18.021588734Z" level=info msg="StartContainer for \"81005a8faed0be45c8fd111c82bcfd4261132dc18737fff6df5c068494972988\" returns successfully" Jul 7 06:18:19.961592 kubelet[3270]: E0707 06:18:19.961542 3270 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-116)" Jul 7 06:18:21.598376 systemd[1]: cri-containerd-0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8.scope: Deactivated successfully. Jul 7 06:18:21.598687 systemd[1]: cri-containerd-0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8.scope: Consumed 1.889s CPU time, 26.7M memory peak, 9.3M read from disk. Jul 7 06:18:21.600935 containerd[1926]: time="2025-07-07T06:18:21.600892136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\" id:\"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\" pid:3118 exit_status:1 exited_at:{seconds:1751869101 nanos:600502621}" Jul 7 06:18:21.601380 containerd[1926]: time="2025-07-07T06:18:21.600959016Z" level=info msg="received exit event container_id:\"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\" id:\"0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8\" pid:3118 exit_status:1 exited_at:{seconds:1751869101 nanos:600502621}" Jul 7 06:18:21.628168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8-rootfs.mount: Deactivated successfully. 
Jul 7 06:18:21.885916 kubelet[3270]: I0707 06:18:21.885892 3270 scope.go:117] "RemoveContainer" containerID="0ea2d199ba6f2d7681d98446731b231c6c589652426b05fb573959c46423bda8" Jul 7 06:18:21.888265 containerd[1926]: time="2025-07-07T06:18:21.888200565Z" level=info msg="CreateContainer within sandbox \"f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 7 06:18:21.910754 containerd[1926]: time="2025-07-07T06:18:21.910033495Z" level=info msg="Container 9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:18:21.914792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179114954.mount: Deactivated successfully. Jul 7 06:18:21.924997 containerd[1926]: time="2025-07-07T06:18:21.924940406Z" level=info msg="CreateContainer within sandbox \"f9bdba2ea7fd0a0daf39469dabb75f2a5ff5600c94408f18e86380ecc257d55b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137\"" Jul 7 06:18:21.925366 containerd[1926]: time="2025-07-07T06:18:21.925336944Z" level=info msg="StartContainer for \"9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137\"" Jul 7 06:18:21.926599 containerd[1926]: time="2025-07-07T06:18:21.926556047Z" level=info msg="connecting to shim 9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137" address="unix:///run/containerd/s/d69bffc31af5728bc4b8ff3c6589a6dc4ffbfe1d18784936b70a8f8a51d65039" protocol=ttrpc version=3 Jul 7 06:18:21.950840 systemd[1]: Started cri-containerd-9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137.scope - libcontainer container 9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137. 
Jul 7 06:18:22.009600 containerd[1926]: time="2025-07-07T06:18:22.009562274Z" level=info msg="StartContainer for \"9308000e1b93436518cef42576fd8ab20ff61eada92b845f8257b4ba00a96137\" returns successfully" Jul 7 06:18:29.962399 kubelet[3270]: E0707 06:18:29.962043 3270 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-116?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"