Aug 13 00:38:54.934546 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 00:38:54.934598 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:38:54.934614 kernel: BIOS-provided physical RAM map: Aug 13 00:38:54.934626 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 00:38:54.934637 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Aug 13 00:38:54.934648 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Aug 13 00:38:54.934663 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Aug 13 00:38:54.934674 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Aug 13 00:38:54.934689 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Aug 13 00:38:54.934700 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Aug 13 00:38:54.934712 kernel: NX (Execute Disable) protection: active Aug 13 00:38:54.934724 kernel: APIC: Static calls initialized Aug 13 00:38:54.934736 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable Aug 13 00:38:54.934749 kernel: extended physical RAM map: Aug 13 00:38:54.934768 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 13 00:38:54.934781 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable Aug 13 00:38:54.934795 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable Aug 13 00:38:54.934809 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable Aug 13 00:38:54.934822 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Aug 13 00:38:54.934835 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Aug 13 00:38:54.934849 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Aug 13 00:38:54.934862 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Aug 13 00:38:54.934875 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Aug 13 00:38:54.934888 kernel: efi: EFI v2.7 by EDK II Aug 13 00:38:54.934904 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003518 Aug 13 00:38:54.934917 kernel: secureboot: Secure boot disabled Aug 13 00:38:54.934930 kernel: SMBIOS 2.7 present. 
Aug 13 00:38:54.934943 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Aug 13 00:38:54.934956 kernel: DMI: Memory slots populated: 1/1 Aug 13 00:38:54.934969 kernel: Hypervisor detected: KVM Aug 13 00:38:54.934983 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:38:54.934996 kernel: kvm-clock: using sched offset of 5446265904 cycles Aug 13 00:38:54.935010 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:38:54.935024 kernel: tsc: Detected 2500.006 MHz processor Aug 13 00:38:54.935038 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:38:54.935054 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:38:54.935068 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Aug 13 00:38:54.935082 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 13 00:38:54.935095 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:38:54.935109 kernel: Using GB pages for direct mapping Aug 13 00:38:54.935128 kernel: ACPI: Early table checksum verification disabled Aug 13 00:38:54.935145 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Aug 13 00:38:54.935159 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Aug 13 00:38:54.935174 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 13 00:38:54.935189 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Aug 13 00:38:54.935203 kernel: ACPI: FACS 0x00000000789D0000 000040 Aug 13 00:38:54.935218 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Aug 13 00:38:54.935232 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 13 00:38:54.935246 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 13 00:38:54.935263 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Aug 13 00:38:54.935278 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Aug 13 00:38:54.935293 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Aug 13 00:38:54.935307 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Aug 13 00:38:54.935321 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Aug 13 00:38:54.935336 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Aug 13 00:38:54.935350 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Aug 13 00:38:54.935365 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Aug 13 00:38:54.935382 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Aug 13 00:38:54.935396 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Aug 13 00:38:54.935411 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Aug 13 00:38:54.935425 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Aug 13 00:38:54.935439 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Aug 13 00:38:54.935454 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Aug 13 00:38:54.935468 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Aug 13 00:38:54.935482 kernel: ACPI: Reserving BGRT table memory 
at [mem 0x78951000-0x78951037] Aug 13 00:38:54.935496 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Aug 13 00:38:54.935514 kernel: NUMA: Initialized distance table, cnt=1 Aug 13 00:38:54.935541 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff] Aug 13 00:38:54.935565 kernel: Zone ranges: Aug 13 00:38:54.935580 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:38:54.935594 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Aug 13 00:38:54.935608 kernel: Normal empty Aug 13 00:38:54.935622 kernel: Device empty Aug 13 00:38:54.935637 kernel: Movable zone start for each node Aug 13 00:38:54.935652 kernel: Early memory node ranges Aug 13 00:38:54.935666 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 13 00:38:54.935683 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Aug 13 00:38:54.935698 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Aug 13 00:38:54.935712 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Aug 13 00:38:54.935727 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:38:54.935740 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 13 00:38:54.935755 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Aug 13 00:38:54.935769 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Aug 13 00:38:54.935784 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 13 00:38:54.935798 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:38:54.935815 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Aug 13 00:38:54.935830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:38:54.935844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:38:54.935858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:38:54.935873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:38:54.935888 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:38:54.935902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:38:54.935917 kernel: TSC deadline timer available Aug 13 00:38:54.935931 kernel: CPU topo: Max. logical packages: 1 Aug 13 00:38:54.935948 kernel: CPU topo: Max. logical dies: 1 Aug 13 00:38:54.935963 kernel: CPU topo: Max. dies per package: 1 Aug 13 00:38:54.935977 kernel: CPU topo: Max. threads per core: 2 Aug 13 00:38:54.935991 kernel: CPU topo: Num. cores per package: 1 Aug 13 00:38:54.936005 kernel: CPU topo: Num. 
threads per package: 2 Aug 13 00:38:54.936020 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 00:38:54.936034 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 00:38:54.936049 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Aug 13 00:38:54.936063 kernel: Booting paravirtualized kernel on KVM Aug 13 00:38:54.936077 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:38:54.936096 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:38:54.936111 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 00:38:54.936125 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 00:38:54.936139 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:38:54.936153 kernel: kvm-guest: PV spinlocks enabled Aug 13 00:38:54.936168 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:38:54.936185 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:38:54.936200 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:38:54.936218 kernel: random: crng init done Aug 13 00:38:54.936232 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:38:54.936247 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:38:54.936261 kernel: Fallback order for Node 0: 0 Aug 13 00:38:54.936276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451 Aug 13 00:38:54.936291 kernel: Policy zone: DMA32 Aug 13 00:38:54.936319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:38:54.936335 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:38:54.936350 kernel: Kernel/User page tables isolation: enabled Aug 13 00:38:54.936366 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 00:38:54.936381 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 00:38:54.936399 kernel: Dynamic Preempt: voluntary Aug 13 00:38:54.936414 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:38:54.936431 kernel: rcu: RCU event tracing is enabled. Aug 13 00:38:54.936447 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:38:54.936463 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:38:54.936479 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:38:54.936497 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:38:54.936512 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:38:54.936626 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:38:54.936645 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:38:54.936661 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:38:54.936676 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Aug 13 00:38:54.936692 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:38:54.936707 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:38:54.936727 kernel: Console: colour dummy device 80x25 Aug 13 00:38:54.936742 kernel: printk: legacy console [tty0] enabled Aug 13 00:38:54.936757 kernel: printk: legacy console [ttyS0] enabled Aug 13 00:38:54.936772 kernel: ACPI: Core revision 20240827 Aug 13 00:38:54.936788 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Aug 13 00:38:54.936804 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:38:54.936819 kernel: x2apic enabled Aug 13 00:38:54.936835 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 00:38:54.936851 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Aug 13 00:38:54.936870 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006) Aug 13 00:38:54.936885 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 13 00:38:54.936901 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Aug 13 00:38:54.936917 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:38:54.936932 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:38:54.936947 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:38:54.936963 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Aug 13 00:38:54.936978 kernel: RETBleed: Vulnerable Aug 13 00:38:54.936993 kernel: Speculative Store Bypass: Vulnerable Aug 13 00:38:54.937007 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:38:54.937025 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:38:54.937050 kernel: GDS: Unknown: Dependent on hypervisor status Aug 13 00:38:54.937065 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:38:54.937081 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:38:54.937096 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:38:54.937111 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:38:54.937127 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Aug 13 00:38:54.937142 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Aug 13 00:38:54.937156 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 13 00:38:54.937172 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 13 00:38:54.937187 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 13 00:38:54.937205 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 00:38:54.937220 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:38:54.937236 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Aug 13 00:38:54.937250 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Aug 13 00:38:54.937266 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Aug 13 00:38:54.937281 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Aug 13 00:38:54.937296 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Aug 13 00:38:54.937311 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Aug 13 00:38:54.937327 kernel: x86/fpu: Enabled xstate features 0x2ff, context 
size is 2568 bytes, using 'compacted' format. Aug 13 00:38:54.937342 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:38:54.937357 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:38:54.937373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 00:38:54.937391 kernel: landlock: Up and running. Aug 13 00:38:54.937406 kernel: SELinux: Initializing. Aug 13 00:38:54.937421 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:38:54.937437 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:38:54.937452 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 13 00:38:54.937467 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 13 00:38:54.937483 kernel: signal: max sigframe size: 3632 Aug 13 00:38:54.937498 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:38:54.937514 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:38:54.939176 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 00:38:54.939215 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:38:54.939230 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:38:54.939245 kernel: smpboot: x86: Booting SMP configuration: Aug 13 00:38:54.939261 kernel: .... node #0, CPUs: #1 Aug 13 00:38:54.939277 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 13 00:38:54.939294 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 13 00:38:54.939309 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:38:54.939323 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS) Aug 13 00:38:54.939342 kernel: Memory: 1908052K/2037804K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 125188K reserved, 0K cma-reserved) Aug 13 00:38:54.939357 kernel: devtmpfs: initialized Aug 13 00:38:54.939372 kernel: x86/mm: Memory block size: 128MB Aug 13 00:38:54.939387 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Aug 13 00:38:54.939403 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:38:54.939418 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:38:54.939433 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:38:54.939450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:38:54.939466 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:38:54.939486 kernel: audit: type=2000 audit(1755045533.395:1): state=initialized audit_enabled=0 res=1 Aug 13 00:38:54.939502 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:38:54.939518 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:38:54.939569 kernel: cpuidle: using governor menu Aug 13 00:38:54.939585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:38:54.939602 kernel: dca service started, version 1.12.1 Aug 13 00:38:54.939618 kernel: PCI: Using configuration type 1 for base access Aug 13 00:38:54.939635 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Aug 13 00:38:54.939651 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:38:54.939671 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:38:54.939687 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:38:54.939702 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:38:54.939715 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:38:54.939730 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:38:54.939745 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:38:54.939761 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 13 00:38:54.939775 kernel: ACPI: Interpreter enabled Aug 13 00:38:54.939789 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:38:54.939808 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:38:54.939825 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:38:54.939841 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 00:38:54.939856 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 00:38:54.939872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:38:54.940125 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:38:54.940279 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 00:38:54.940423 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 00:38:54.940443 kernel: acpiphp: Slot [3] registered Aug 13 00:38:54.940459 kernel: acpiphp: Slot [4] registered Aug 13 00:38:54.940474 kernel: acpiphp: Slot [5] registered Aug 13 00:38:54.940489 kernel: acpiphp: Slot [6] registered Aug 13 00:38:54.940503 kernel: acpiphp: Slot [7] registered Aug 13 00:38:54.940518 kernel: acpiphp: Slot [8] registered Aug 13 00:38:54.940558 kernel: acpiphp: Slot [9] registered Aug 13 00:38:54.940574 kernel: acpiphp: Slot [10] registered Aug 13 00:38:54.940589 kernel: acpiphp: Slot [11] registered Aug 13 00:38:54.940608 kernel: acpiphp: Slot [12] registered Aug 13 00:38:54.940624 kernel: acpiphp: Slot [13] registered Aug 13 00:38:54.940639 kernel: acpiphp: Slot [14] registered Aug 13 00:38:54.940654 kernel: acpiphp: Slot [15] registered Aug 13 00:38:54.940669 kernel: acpiphp: Slot [16] registered Aug 13 00:38:54.940684 kernel: acpiphp: Slot [17] registered Aug 13 00:38:54.940699 kernel: acpiphp: Slot [18] registered Aug 13 00:38:54.940715 kernel: acpiphp: Slot [19] registered Aug 13 00:38:54.940730 kernel: acpiphp: Slot [20] registered Aug 13 00:38:54.940748 kernel: acpiphp: Slot [21] registered Aug 13 00:38:54.940763 kernel: acpiphp: Slot [22] registered Aug 13 00:38:54.940778 kernel: acpiphp: Slot [23] registered Aug 13 00:38:54.940793 kernel: acpiphp: Slot [24] registered Aug 13 00:38:54.940809 kernel: acpiphp: Slot [25] registered Aug 13 00:38:54.940824 kernel: acpiphp: Slot [26] registered Aug 13 00:38:54.940839 kernel: acpiphp: Slot [27] registered Aug 13 00:38:54.940855 kernel: acpiphp: Slot [28] registered Aug 13 00:38:54.940870 kernel: acpiphp: Slot [29] registered Aug 13 00:38:54.940885 kernel: acpiphp: Slot [30] registered Aug 13 00:38:54.940902 kernel: acpiphp: Slot [31] registered Aug 13 00:38:54.940918 kernel: PCI host bridge to bus 0000:00 Aug 13 00:38:54.941085 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 
00:38:54.941213 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:38:54.941335 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:38:54.941453 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 00:38:54.945665 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Aug 13 00:38:54.945834 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:38:54.946012 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Aug 13 00:38:54.946174 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Aug 13 00:38:54.946322 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint Aug 13 00:38:54.946469 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 13 00:38:54.946665 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Aug 13 00:38:54.946819 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Aug 13 00:38:54.946965 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Aug 13 00:38:54.947097 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Aug 13 00:38:54.947226 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Aug 13 00:38:54.947357 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Aug 13 00:38:54.947495 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint Aug 13 00:38:54.947658 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref] Aug 13 00:38:54.947801 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref] Aug 13 00:38:54.947936 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:38:54.948080 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint Aug 13 00:38:54.948215 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff] Aug 13 00:38:54.948362 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint Aug 13 00:38:54.948497 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff] Aug 13 00:38:54.948521 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:38:54.949348 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:38:54.949369 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:38:54.949385 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:38:54.949401 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 00:38:54.949417 kernel: iommu: Default domain type: Translated Aug 13 00:38:54.949433 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:38:54.949449 kernel: efivars: Registered efivars operations Aug 13 00:38:54.949465 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:38:54.949485 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:38:54.949501 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff] Aug 13 00:38:54.949517 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Aug 13 00:38:54.949551 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Aug 13 00:38:54.949728 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Aug 13 00:38:54.949868 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Aug 13 00:38:54.950006 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:38:54.950026 kernel: vgaarb: loaded Aug 13 00:38:54.950043 kernel: hpet0: at 
MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Aug 13 00:38:54.950064 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Aug 13 00:38:54.950079 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:38:54.950095 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:38:54.950111 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:38:54.950126 kernel: pnp: PnP ACPI init Aug 13 00:38:54.950142 kernel: pnp: PnP ACPI: found 5 devices Aug 13 00:38:54.950158 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:38:54.950174 kernel: NET: Registered PF_INET protocol family Aug 13 00:38:54.950190 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:38:54.950209 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 00:38:54.950225 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:38:54.950241 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:38:54.950257 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 00:38:54.950272 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 00:38:54.950288 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:38:54.950303 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:38:54.950319 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:38:54.950338 kernel: NET: Registered PF_XDP protocol family Aug 13 00:38:54.950467 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:38:54.950697 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:38:54.950821 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:38:54.950937 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 00:38:54.951052 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Aug 13 00:38:54.951204 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:38:54.951227 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:38:54.951248 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:38:54.951266 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Aug 13 00:38:54.951283 kernel: clocksource: Switched to clocksource tsc Aug 13 00:38:54.951300 kernel: Initialise system trusted keyrings Aug 13 00:38:54.951317 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 00:38:54.951334 kernel: Key type asymmetric registered Aug 13 00:38:54.951348 kernel: Asymmetric key parser 'x509' registered Aug 13 00:38:54.951365 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:38:54.951382 kernel: io scheduler mq-deadline registered Aug 13 00:38:54.951401 kernel: io scheduler kyber registered Aug 13 00:38:54.951418 kernel: io scheduler bfq registered Aug 13 00:38:54.951435 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:38:54.951451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:38:54.951466 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:38:54.951482 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:38:54.951498 kernel: i8042: Warning: Keylock active Aug 13 
00:38:54.951513 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:38:54.951548 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:38:54.951696 kernel: rtc_cmos 00:00: RTC can wake from S4 Aug 13 00:38:54.951820 kernel: rtc_cmos 00:00: registered as rtc0 Aug 13 00:38:54.951941 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T00:38:54 UTC (1755045534) Aug 13 00:38:54.952060 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Aug 13 00:38:54.952079 kernel: intel_pstate: CPU model not supported Aug 13 00:38:54.952119 kernel: efifb: probing for efifb Aug 13 00:38:54.952138 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Aug 13 00:38:54.952155 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Aug 13 00:38:54.952174 kernel: efifb: scrolling: redraw Aug 13 00:38:54.952190 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 13 00:38:54.952207 kernel: Console: switching to colour frame buffer device 100x37 Aug 13 00:38:54.952223 kernel: fb0: EFI VGA frame buffer device Aug 13 00:38:54.952240 kernel: pstore: Using crash dump compression: deflate Aug 13 00:38:54.952257 kernel: pstore: Registered efi_pstore as persistent store backend Aug 13 00:38:54.952276 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:38:54.952293 kernel: Segment Routing with IPv6 Aug 13 00:38:54.952309 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:38:54.952329 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:38:54.952345 kernel: Key type dns_resolver registered Aug 13 00:38:54.952361 kernel: IPI shorthand broadcast: enabled Aug 13 00:38:54.952378 kernel: sched_clock: Marking stable (2725002336, 149534538)->(2970072575, -95535701) Aug 13 00:38:54.952395 kernel: registered taskstats version 1 Aug 13 00:38:54.952411 kernel: Loading compiled-in X.509 certificates Aug 13 00:38:54.952428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 00:38:54.952444 kernel: Demotion targets for Node 0: null Aug 13 00:38:54.952460 kernel: Key type .fscrypt registered Aug 13 00:38:54.952479 kernel: Key type fscrypt-provisioning registered Aug 13 00:38:54.952496 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:38:54.952513 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:38:54.952551 kernel: ima: No architecture policies found Aug 13 00:38:54.952566 kernel: clk: Disabling unused clocks Aug 13 00:38:54.952582 kernel: Warning: unable to open an initial console. Aug 13 00:38:54.952599 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 00:38:54.952615 kernel: Write protecting the kernel read-only data: 24576k Aug 13 00:38:54.952632 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 00:38:54.952652 kernel: Run /init as init process Aug 13 00:38:54.952671 kernel: with arguments: Aug 13 00:38:54.952688 kernel: /init Aug 13 00:38:54.952704 kernel: with environment: Aug 13 00:38:54.952719 kernel: HOME=/ Aug 13 00:38:54.952736 kernel: TERM=linux Aug 13 00:38:54.952755 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:38:54.952773 systemd[1]: Successfully made /usr/ read-only. 
Aug 13 00:38:54.952795 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:38:54.952813 systemd[1]: Detected virtualization amazon. Aug 13 00:38:54.952830 systemd[1]: Detected architecture x86-64. Aug 13 00:38:54.952847 systemd[1]: Running in initrd. Aug 13 00:38:54.952864 systemd[1]: No hostname configured, using default hostname. Aug 13 00:38:54.952884 systemd[1]: Hostname set to . Aug 13 00:38:54.952898 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:38:54.952916 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:38:54.952933 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:38:54.952951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:38:54.952970 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:38:54.952988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:38:54.953005 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:38:54.953037 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:38:54.953056 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:38:54.953074 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:38:54.953092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:38:54.953109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:38:54.953126 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:38:54.953146 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:38:54.953164 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:38:54.953182 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:38:54.953199 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:38:54.953217 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:38:54.953235 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:38:54.953252 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:38:54.953270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:38:54.953288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:38:54.953308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:38:54.953325 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:38:54.953342 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:38:54.953360 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:38:54.953378 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Aug 13 00:38:54.953395 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:38:54.953411 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:38:54.953425 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:38:54.953448 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:38:54.953468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:38:54.953489 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:38:54.953511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:38:54.953548 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:38:54.953621 systemd-journald[207]: Collecting audit messages is disabled. Aug 13 00:38:54.953659 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:38:54.953677 systemd-journald[207]: Journal started Aug 13 00:38:54.953713 systemd-journald[207]: Runtime Journal (/run/log/journal/ec21df5a83450f1ddcd247b1c879c0a7) is 4.8M, max 38.4M, 33.6M free. Aug 13 00:38:54.926456 systemd-modules-load[208]: Inserted module 'overlay' Aug 13 00:38:54.963943 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:38:54.965204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:38:54.976875 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:38:54.980110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:38:54.982687 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:38:54.986811 kernel: Bridge firewalling registered Aug 13 00:38:54.984070 systemd-modules-load[208]: Inserted module 'br_netfilter' Aug 13 00:38:54.988511 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:38:54.998814 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:38:55.009669 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:38:55.002698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:38:55.013997 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:38:55.019389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:38:55.025499 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:38:55.033079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:38:55.039204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:38:55.041190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:38:55.045711 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:38:55.055723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:38:55.074199 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:38:55.104526 systemd-resolved[246]: Positive Trust Anchors: Aug 13 00:38:55.104555 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:38:55.104615 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:38:55.112981 systemd-resolved[246]: Defaulting to hostname 'linux'. Aug 13 00:38:55.116182 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:38:55.116926 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:38:55.172569 kernel: SCSI subsystem initialized Aug 13 00:38:55.182563 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:38:55.195572 kernel: iscsi: registered transport (tcp) Aug 13 00:38:55.219258 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:38:55.219343 kernel: QLogic iSCSI HBA Driver Aug 13 00:38:55.238643 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:38:55.258981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:38:55.260270 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:38:55.308252 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:38:55.310353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:38:55.362592 kernel: raid6: avx512x4 gen() 17901 MB/s Aug 13 00:38:55.380564 kernel: raid6: avx512x2 gen() 18065 MB/s Aug 13 00:38:55.398582 kernel: raid6: avx512x1 gen() 17024 MB/s Aug 13 00:38:55.416557 kernel: raid6: avx2x4 gen() 18004 MB/s Aug 13 00:38:55.434564 kernel: raid6: avx2x2 gen() 17944 MB/s Aug 13 00:38:55.452858 kernel: raid6: avx2x1 gen() 13668 MB/s Aug 13 00:38:55.452918 kernel: raid6: using algorithm avx512x2 gen() 18065 MB/s Aug 13 00:38:55.472067 kernel: raid6: .... xor() 24306 MB/s, rmw enabled Aug 13 00:38:55.472149 kernel: raid6: using avx512x2 recovery algorithm Aug 13 00:38:55.493565 kernel: xor: automatically using best checksumming function avx Aug 13 00:38:55.662564 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:38:55.670024 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:38:55.672228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:38:55.704196 systemd-udevd[456]: Using default interface naming scheme 'v255'. 
Aug 13 00:38:55.711086 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:38:55.715233 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:38:55.743634 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Aug 13 00:38:55.774645 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:38:55.776505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:38:55.832207 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:38:55.838034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:38:55.932574 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 13 00:38:55.932849 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:38:55.935122 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 00:38:55.944076 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 13 00:38:55.944369 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 13 00:38:55.952555 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 13 00:38:55.957563 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Aug 13 00:38:55.964719 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:38:55.964799 kernel: GPT:9289727 != 16777215 Aug 13 00:38:55.964819 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:75:d7:6d:5c:6f Aug 13 00:38:55.965097 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:38:55.968707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:38:55.973216 kernel: GPT:9289727 != 16777215 Aug 13 00:38:55.973252 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:38:55.973274 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:38:55.969115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:38:55.974676 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:38:55.977832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:38:55.981071 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:38:55.985131 (udev-worker)[514]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:38:55.998600 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 00:38:56.003380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:38:56.003523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:38:56.009647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:38:56.014704 kernel: AES CTR mode by8 optimization enabled Aug 13 00:38:56.051969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:38:56.060557 kernel: nvme nvme0: using unchecked data buffer Aug 13 00:38:56.208644 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 13 00:38:56.209646 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:38:56.229665 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 13 00:38:56.241601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Aug 13 00:38:56.251005 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 13 00:38:56.251574 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 13 00:38:56.253180 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:38:56.254261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:38:56.255385 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:38:56.257201 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:38:56.261710 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:38:56.284497 disk-uuid[691]: Primary Header is updated. Aug 13 00:38:56.284497 disk-uuid[691]: Secondary Entries is updated. Aug 13 00:38:56.284497 disk-uuid[691]: Secondary Header is updated. Aug 13 00:38:56.291998 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:38:56.295567 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:38:57.311552 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:38:57.312144 disk-uuid[694]: The operation has completed successfully. Aug 13 00:38:57.455267 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:38:57.455392 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:38:57.493643 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:38:57.519553 sh[959]: Success Aug 13 00:38:57.545675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:38:57.545756 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:38:57.546797 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:38:57.558555 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Aug 13 00:38:57.671230 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:38:57.675630 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:38:57.687359 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:38:57.713310 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:38:57.713379 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (982) Aug 13 00:38:57.719776 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:38:57.719981 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:38:57.722415 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:38:57.864570 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:38:57.865904 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:38:57.866755 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:38:57.867861 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:38:57.870694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 00:38:57.912599 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1017) Aug 13 00:38:57.917670 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:38:57.917742 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:38:57.919983 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 13 00:38:57.934551 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:38:57.935122 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:38:57.937935 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:38:57.976937 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:38:57.979569 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:38:58.018369 systemd-networkd[1151]: lo: Link UP Aug 13 00:38:58.018381 systemd-networkd[1151]: lo: Gained carrier Aug 13 00:38:58.020085 systemd-networkd[1151]: Enumeration completed Aug 13 00:38:58.020499 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:38:58.020505 systemd-networkd[1151]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:38:58.021700 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:38:58.023258 systemd[1]: Reached target network.target - Network. Aug 13 00:38:58.024646 systemd-networkd[1151]: eth0: Link UP Aug 13 00:38:58.024651 systemd-networkd[1151]: eth0: Gained carrier Aug 13 00:38:58.024670 systemd-networkd[1151]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:38:58.035642 systemd-networkd[1151]: eth0: DHCPv4 address 172.31.31.138/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:38:58.649963 ignition[1100]: Ignition 2.21.0 Aug 13 00:38:58.649982 ignition[1100]: Stage: fetch-offline Aug 13 00:38:58.650225 ignition[1100]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:38:58.650237 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:38:58.650661 ignition[1100]: Ignition finished successfully Aug 13 00:38:58.657345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:38:58.664409 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 00:38:58.715527 ignition[1161]: Ignition 2.21.0 Aug 13 00:38:58.715569 ignition[1161]: Stage: fetch Aug 13 00:38:58.718177 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:38:58.718206 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:38:58.718380 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:38:58.753988 ignition[1161]: PUT result: OK Aug 13 00:38:58.761889 ignition[1161]: parsed url from cmdline: "" Aug 13 00:38:58.761900 ignition[1161]: no config URL provided Aug 13 00:38:58.761912 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:38:58.761929 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:38:58.761964 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:38:58.762853 ignition[1161]: PUT result: OK Aug 13 00:38:58.762920 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 13 00:38:58.763895 ignition[1161]: GET result: OK Aug 13 00:38:58.764003 ignition[1161]: parsing config with SHA512: 367c8156983339ac7bf095c36cf920df181ec6e9793656f0f64096b188ac5b9399effd80cb8a4b655741643363bdb2dca9b84e8e3b0708190546284d950ef044 Aug 13 00:38:58.770115 unknown[1161]: fetched base config from "system" Aug 13 00:38:58.770130 unknown[1161]: fetched base config from "system" Aug 13 00:38:58.770742 ignition[1161]: fetch: fetch complete Aug 13 00:38:58.770138 unknown[1161]: fetched user config from "aws" Aug 13 00:38:58.770749 ignition[1161]: fetch: fetch passed Aug 13 00:38:58.773596 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:38:58.770815 ignition[1161]: Ignition finished successfully Aug 13 00:38:58.776043 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:38:58.810876 ignition[1167]: Ignition 2.21.0 Aug 13 00:38:58.810896 ignition[1167]: Stage: kargs Aug 13 00:38:58.811282 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:38:58.811295 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:38:58.811414 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:38:58.813191 ignition[1167]: PUT result: OK Aug 13 00:38:58.819066 ignition[1167]: kargs: kargs passed Aug 13 00:38:58.819325 ignition[1167]: Ignition finished successfully Aug 13 00:38:58.821582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:38:58.823892 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:38:58.864959 ignition[1174]: Ignition 2.21.0 Aug 13 00:38:58.864974 ignition[1174]: Stage: disks Aug 13 00:38:58.865384 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:38:58.865397 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:38:58.868341 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:38:58.869891 ignition[1174]: PUT result: OK Aug 13 00:38:58.878588 ignition[1174]: disks: disks passed Aug 13 00:38:58.879246 ignition[1174]: Ignition finished successfully Aug 13 00:38:58.881174 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:38:58.881873 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:38:58.882335 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:38:58.882950 systemd[1]: Reached target local-fs.target - Local File Systems. 
Aug 13 00:38:58.883558 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:38:58.884165 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:38:58.885929 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:38:58.941697 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 00:38:58.944293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:38:58.946788 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:38:59.087828 systemd-networkd[1151]: eth0: Gained IPv6LL Aug 13 00:38:59.109563 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:38:59.110743 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:38:59.112157 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:38:59.114727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:38:59.117277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:38:59.120106 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:38:59.120883 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:38:59.120921 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:38:59.134168 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:38:59.136200 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:38:59.155574 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Aug 13 00:38:59.158555 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:38:59.158616 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:38:59.160943 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 13 00:38:59.170081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:38:59.662055 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:38:59.689379 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:38:59.716385 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:38:59.750091 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:39:00.280757 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:39:00.284084 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:39:00.286669 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:39:00.334307 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:39:00.345562 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:39:00.431953 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 00:39:00.434065 ignition[1313]: INFO : Ignition 2.21.0 Aug 13 00:39:00.434065 ignition[1313]: INFO : Stage: mount Aug 13 00:39:00.451069 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:39:00.451069 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:39:00.451069 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:39:00.451069 ignition[1313]: INFO : PUT result: OK Aug 13 00:39:00.457034 ignition[1313]: INFO : mount: mount passed Aug 13 00:39:00.457772 ignition[1313]: INFO : Ignition finished successfully Aug 13 00:39:00.461976 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:39:00.463790 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:39:00.507822 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:39:00.603961 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Aug 13 00:39:00.618579 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:39:00.618661 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:39:00.623904 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 13 00:39:00.638484 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:39:00.735919 ignition[1342]: INFO : Ignition 2.21.0 Aug 13 00:39:00.735919 ignition[1342]: INFO : Stage: files Aug 13 00:39:00.744212 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:39:00.744212 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:39:00.744212 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:39:00.744212 ignition[1342]: INFO : PUT result: OK Aug 13 00:39:00.769807 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:39:00.781767 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:39:00.781767 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:39:00.789079 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:39:00.792074 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:39:00.792074 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:39:00.790362 unknown[1342]: wrote ssh authorized keys file for user: core Aug 13 00:39:00.805362 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:39:00.806670 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 00:39:00.870785 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:39:01.168557 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 00:39:01.168557 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:39:01.168557 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:39:01.394032 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:39:01.619485 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:39:01.619485 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:39:01.622397 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:39:01.630121 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:39:01.630121 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:39:01.630121 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:39:01.630121 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:39:01.630121 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:39:01.630121 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 00:39:02.029517 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:39:02.661463 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 00:39:02.661463 ignition[1342]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:39:02.663595 ignition[1342]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:39:02.668202 ignition[1342]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:39:02.668202 ignition[1342]: INFO : files: op(c): [finished] processing 
unit "prepare-helm.service" Aug 13 00:39:02.668202 ignition[1342]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:39:02.672602 ignition[1342]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:39:02.672602 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:39:02.672602 ignition[1342]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:39:02.672602 ignition[1342]: INFO : files: files passed Aug 13 00:39:02.672602 ignition[1342]: INFO : Ignition finished successfully Aug 13 00:39:02.672807 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:39:02.675417 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:39:02.679694 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:39:02.708097 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:39:02.708427 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:39:02.720177 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:39:02.722233 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:39:02.723601 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:39:02.724779 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:39:02.725615 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:39:02.727642 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:39:02.775819 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:39:02.775932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:39:02.777456 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:39:02.778336 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:39:02.779194 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:39:02.780134 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:39:02.802928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:39:02.805402 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:39:02.838474 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:39:02.839255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:39:02.840353 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:39:02.841359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:39:02.841628 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:39:02.842764 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:39:02.843688 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:39:02.844547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Aug 13 00:39:02.845432 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:39:02.846228 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:39:02.846968 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:39:02.847809 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:39:02.848597 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:39:02.849479 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:39:02.850702 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:39:02.851474 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:39:02.852183 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:39:02.852367 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:39:02.853613 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:39:02.854428 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:39:02.855115 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:39:02.855259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:39:02.855922 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:39:02.856101 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:39:02.857596 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:39:02.857848 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:39:02.858580 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:39:02.858784 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:39:02.861643 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:39:02.862323 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:39:02.862591 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:39:02.865934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:39:02.869697 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:39:02.869989 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:39:02.871064 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:39:02.871273 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:39:02.878952 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:39:02.882901 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:39:02.898476 ignition[1397]: INFO : Ignition 2.21.0 Aug 13 00:39:02.898476 ignition[1397]: INFO : Stage: umount Aug 13 00:39:02.901548 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:39:02.901548 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:39:02.901548 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:39:02.901548 ignition[1397]: INFO : PUT result: OK Aug 13 00:39:02.906952 ignition[1397]: INFO : umount: umount passed Aug 13 00:39:02.907594 ignition[1397]: INFO : Ignition finished successfully Aug 13 00:39:02.909892 systemd[1]: ignition-mount.service: Deactivated successfully. 
Aug 13 00:39:02.910618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:39:02.912527 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:39:02.914154 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:39:02.914674 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:39:02.915153 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:39:02.915214 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:39:02.915812 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:39:02.915865 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:39:02.916448 systemd[1]: Stopped target network.target - Network. Aug 13 00:39:02.917161 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:39:02.917226 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:39:02.917836 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:39:02.918405 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:39:02.921627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:39:02.922072 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:39:02.923214 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:39:02.924085 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:39:02.924145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:39:02.924613 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:39:02.924660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:39:02.925357 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:39:02.925436 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:39:02.926054 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:39:02.926113 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:39:02.926892 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:39:02.927497 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:39:02.933713 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:39:02.933848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:39:02.938666 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:39:02.938936 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:39:02.939026 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:39:02.941294 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:39:02.941966 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 00:39:02.942387 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:39:02.942428 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:39:02.943930 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:39:02.944259 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:39:02.944309 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:39:02.944698 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 13 00:39:02.944737 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:39:02.945311 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:39:02.945351 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:39:02.946103 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:39:02.946149 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:39:02.947690 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:39:02.952687 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:39:02.952791 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:39:02.965054 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:39:02.965281 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:39:02.969753 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:39:02.969875 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:39:02.971747 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:39:02.971798 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:39:02.972286 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:39:02.972355 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:39:02.973636 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:39:02.973709 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:39:02.975049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:39:02.975125 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:39:02.978499 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:39:02.979604 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 00:39:02.979683 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:39:02.982713 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:39:02.982796 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:39:02.984697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:39:02.984774 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:39:02.989102 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 00:39:02.989197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:39:02.989264 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:39:02.989897 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:39:02.990042 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:39:02.998990 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:39:02.999127 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:39:03.042494 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Aug 13 00:39:03.042668 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:39:03.044172 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:39:03.045456 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:39:03.045585 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:39:03.047471 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:39:03.071087 systemd[1]: Switching root. Aug 13 00:39:03.146209 systemd-journald[207]: Journal stopped Aug 13 00:39:05.331408 systemd-journald[207]: Received SIGTERM from PID 1 (systemd). Aug 13 00:39:05.331507 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:39:05.332327 kernel: SELinux: policy capability open_perms=1 Aug 13 00:39:05.332377 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:39:05.332412 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:39:05.332431 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:39:05.332450 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:39:05.332469 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:39:05.332488 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:39:05.332506 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 00:39:05.332525 kernel: audit: type=1403 audit(1755045543.685:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:39:05.332565 systemd[1]: Successfully loaded SELinux policy in 106.761ms. Aug 13 00:39:05.332603 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.028ms. Aug 13 00:39:05.332630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:39:05.332652 systemd[1]: Detected virtualization amazon. Aug 13 00:39:05.332672 systemd[1]: Detected architecture x86-64. Aug 13 00:39:05.332692 systemd[1]: Detected first boot. Aug 13 00:39:05.332712 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:39:05.332732 zram_generator::config[1440]: No configuration found. Aug 13 00:39:05.332752 kernel: Guest personality initialized and is inactive Aug 13 00:39:05.332768 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:39:05.332788 kernel: Initialized host personality Aug 13 00:39:05.332805 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:39:05.332826 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:39:05.332846 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:39:05.332865 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:39:05.332883 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:39:05.332921 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:39:05.332940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:39:05.332961 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:39:05.332980 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:39:05.333000 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Aug 13 00:39:05.333019 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:39:05.333037 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:39:05.333057 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:39:05.333076 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:39:05.333094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:39:05.333113 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:39:05.333135 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:39:05.333154 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:39:05.333173 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:39:05.333192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:39:05.333209 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:39:05.333228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:39:05.333246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:39:05.333264 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:39:05.333287 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:39:05.333305 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:39:05.333323 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:39:05.333342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:39:05.333360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:39:05.333378 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:39:05.333397 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:39:05.333421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:39:05.333441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:39:05.333463 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:39:05.333482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:39:05.333500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:39:05.333519 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:39:05.334590 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:39:05.334627 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:39:05.334648 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:39:05.334669 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:39:05.334694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:05.334722 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:39:05.334746 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Aug 13 00:39:05.334771 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:39:05.334798 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:39:05.334823 systemd[1]: Reached target machines.target - Containers. Aug 13 00:39:05.334847 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:39:05.334872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:39:05.334897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:39:05.334926 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:39:05.334950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:39:05.334975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:39:05.334998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:39:05.335022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:39:05.335046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:39:05.335072 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:39:05.335097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:39:05.335120 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:39:05.335147 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:39:05.335171 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:39:05.335197 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:39:05.335221 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:39:05.335245 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:39:05.335268 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:39:05.335292 kernel: loop: module loaded Aug 13 00:39:05.335317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:39:05.335346 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:39:05.335370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:39:05.335401 kernel: fuse: init (API version 7.41) Aug 13 00:39:05.335425 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:39:05.335450 systemd[1]: Stopped verity-setup.service. Aug 13 00:39:05.335473 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:05.335499 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:39:05.335523 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:39:05.336705 systemd[1]: Mounted media.mount - External Media Directory. 
Aug 13 00:39:05.336748 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:39:05.336773 kernel: ACPI: bus type drm_connector registered Aug 13 00:39:05.336792 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:39:05.336811 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:39:05.336831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:39:05.336850 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:39:05.336870 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:39:05.336898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:39:05.336917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:39:05.336935 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:39:05.336956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:39:05.336975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:39:05.336993 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:39:05.337013 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:39:05.337032 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:39:05.337051 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:39:05.337069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:39:05.337087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:39:05.337107 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:39:05.337172 systemd-journald[1519]: Collecting audit messages is disabled. Aug 13 00:39:05.337209 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:39:05.337228 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:39:05.337247 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:39:05.337270 systemd-journald[1519]: Journal started Aug 13 00:39:05.337305 systemd-journald[1519]: Runtime Journal (/run/log/journal/ec21df5a83450f1ddcd247b1c879c0a7) is 4.8M, max 38.4M, 33.6M free. Aug 13 00:39:04.869315 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:39:04.882064 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 13 00:39:04.882612 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:39:05.346575 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:39:05.349571 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:39:05.357601 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:39:05.357685 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:39:05.366556 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:39:05.376227 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:39:05.376332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 00:39:05.381559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:39:05.386561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:39:05.396460 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:39:05.397586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:39:05.405562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:39:05.416570 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:39:05.424565 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:39:05.429017 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:39:05.430951 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:39:05.433228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:39:05.448613 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:39:05.473192 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:39:05.476181 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:39:05.479242 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:39:05.483504 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:39:05.487304 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:39:05.490997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:39:05.511368 systemd-journald[1519]: Time spent on flushing to /var/log/journal/ec21df5a83450f1ddcd247b1c879c0a7 is 59.617ms for 1023 entries. Aug 13 00:39:05.511368 systemd-journald[1519]: System Journal (/var/log/journal/ec21df5a83450f1ddcd247b1c879c0a7) is 8M, max 195.6M, 187.6M free. Aug 13 00:39:05.587643 systemd-journald[1519]: Received client request to flush runtime journal. Aug 13 00:39:05.587734 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 00:39:05.593725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:39:05.601119 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:39:05.625859 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:39:05.633699 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:39:05.665887 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:39:05.698176 kernel: loop1: detected capacity change from 0 to 113872 Aug 13 00:39:05.718958 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Aug 13 00:39:05.718985 systemd-tmpfiles[1590]: ACLs are not supported, ignoring. Aug 13 00:39:05.727301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:39:05.829566 kernel: loop2: detected capacity change from 0 to 72352 Aug 13 00:39:05.884371 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 00:39:05.935557 kernel: loop3: detected capacity change from 0 to 221472 Aug 13 00:39:05.988565 kernel: loop4: detected capacity change from 0 to 146240 Aug 13 00:39:06.050594 kernel: loop5: detected capacity change from 0 to 113872 Aug 13 00:39:06.070571 kernel: loop6: detected capacity change from 0 to 72352 Aug 13 00:39:06.088569 kernel: loop7: detected capacity change from 0 to 221472 Aug 13 00:39:06.127227 (sd-merge)[1598]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 13 00:39:06.128553 (sd-merge)[1598]: Merged extensions into '/usr'. Aug 13 00:39:06.136083 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:39:06.136108 systemd[1]: Reloading... Aug 13 00:39:06.247580 zram_generator::config[1620]: No configuration found. Aug 13 00:39:06.424747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:39:06.554781 systemd[1]: Reloading finished in 417 ms. Aug 13 00:39:06.576641 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:39:06.577670 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:39:06.588838 systemd[1]: Starting ensure-sysext.service... Aug 13 00:39:06.591675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:39:06.595913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:39:06.631685 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:39:06.631707 systemd[1]: Reloading... Aug 13 00:39:06.652256 systemd-udevd[1678]: Using default interface naming scheme 'v255'. Aug 13 00:39:06.659779 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 00:39:06.660181 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:39:06.660712 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:39:06.661336 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:39:06.662809 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:39:06.663348 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Aug 13 00:39:06.663509 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Aug 13 00:39:06.669264 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:39:06.669384 systemd-tmpfiles[1677]: Skipping /boot Aug 13 00:39:06.703117 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:39:06.703577 systemd-tmpfiles[1677]: Skipping /boot Aug 13 00:39:06.739562 zram_generator::config[1703]: No configuration found. Aug 13 00:39:07.055712 (udev-worker)[1739]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:39:07.103682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
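The loop0–loop7 capacity changes and the sd-merge line above are the systemd-sysext images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) being attached and merged into /usr, after which systemd reloads. A small sketch for checking which extension images a booted node has staged, assuming the /etc/extensions layout the files stage populated earlier (the kubernetes.raw symlink):

import os

ext_dir = "/etc/extensions"  # populated by the Ignition files stage above
for entry in sorted(os.scandir(ext_dir), key=lambda e: e.name):
    if entry.is_symlink():
        print(entry.name, "->", os.readlink(entry.path))
    else:
        print(entry.name)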
Aug 13 00:39:07.119710 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:39:07.255673 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 13 00:39:07.274556 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 00:39:07.294552 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:39:07.304569 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Aug 13 00:39:07.323192 ldconfig[1551]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:39:07.325667 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:39:07.326077 systemd[1]: Reloading finished in 693 ms. Aug 13 00:39:07.329555 kernel: ACPI: button: Sleep Button [SLPF] Aug 13 00:39:07.341877 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:39:07.344113 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:39:07.345526 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:39:07.395608 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:07.399116 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:39:07.405924 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:39:07.407448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:39:07.410339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:39:07.419746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:39:07.423389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:39:07.425657 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:39:07.425859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:39:07.431655 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:39:07.455840 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:39:07.469941 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:39:07.474484 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:39:07.475293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:07.483321 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:07.483638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:39:07.483877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 00:39:07.484007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:39:07.484130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:07.493637 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:39:07.495598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:39:07.496888 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:39:07.497134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:39:07.511159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:39:07.511550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:39:07.513589 systemd[1]: Finished ensure-sysext.service. Aug 13 00:39:07.515341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:07.516721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:39:07.518607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:39:07.519831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:39:07.519874 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:39:07.519946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:39:07.520016 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:39:07.520062 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:39:07.525575 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:39:07.526615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:39:07.554650 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:39:07.556461 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:39:07.557429 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:39:07.612323 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:39:07.618623 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:39:07.646683 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:39:07.647886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:39:07.662591 augenrules[1910]: No rules Aug 13 00:39:07.671944 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 13 00:39:07.672276 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:39:07.687406 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:39:07.778806 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:39:07.847514 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:39:07.911197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:39:07.915265 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:39:07.940571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:39:07.996614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:39:08.036467 systemd-resolved[1841]: Positive Trust Anchors: Aug 13 00:39:08.036921 systemd-resolved[1841]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:39:08.036997 systemd-resolved[1841]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:39:08.039579 systemd-networkd[1839]: lo: Link UP Aug 13 00:39:08.039588 systemd-networkd[1839]: lo: Gained carrier Aug 13 00:39:08.041391 systemd-networkd[1839]: Enumeration completed Aug 13 00:39:08.041557 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:39:08.043005 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:39:08.043020 systemd-networkd[1839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:39:08.043959 systemd-resolved[1841]: Defaulting to hostname 'linux'. Aug 13 00:39:08.045437 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:39:08.047601 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:39:08.048956 systemd-networkd[1839]: eth0: Link UP Aug 13 00:39:08.049258 systemd-networkd[1839]: eth0: Gained carrier Aug 13 00:39:08.049292 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:39:08.055056 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:39:08.055876 systemd[1]: Reached target network.target - Network. Aug 13 00:39:08.056496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:39:08.056956 systemd-networkd[1839]: eth0: DHCPv4 address 172.31.31.138/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:39:08.057095 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:39:08.058851 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Aug 13 00:39:08.059319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:39:08.060630 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:39:08.061238 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:39:08.061770 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:39:08.062166 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:39:08.062577 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:39:08.062619 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:39:08.063099 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:39:08.067206 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:39:08.069169 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:39:08.073870 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:39:08.074819 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:39:08.075370 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:39:08.078199 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:39:08.079137 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:39:08.080388 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:39:08.081952 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:39:08.082364 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:39:08.082810 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:39:08.082846 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:39:08.088656 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:39:08.091020 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:39:08.093775 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:39:08.096136 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:39:08.099570 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:39:08.103745 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:39:08.104657 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:39:08.109765 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:39:08.113821 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:39:08.125153 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:39:08.135837 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:39:08.156634 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:39:08.161105 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Aug 13 00:39:08.174594 jq[1961]: false Aug 13 00:39:08.169711 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:39:08.178816 google_oslogin_nss_cache[1963]: oslogin_cache_refresh[1963]: Refreshing passwd entry cache Aug 13 00:39:08.177919 oslogin_cache_refresh[1963]: Refreshing passwd entry cache Aug 13 00:39:08.186060 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:39:08.191208 google_oslogin_nss_cache[1963]: oslogin_cache_refresh[1963]: Failure getting users, quitting Aug 13 00:39:08.191208 google_oslogin_nss_cache[1963]: oslogin_cache_refresh[1963]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:39:08.191208 google_oslogin_nss_cache[1963]: oslogin_cache_refresh[1963]: Refreshing group entry cache Aug 13 00:39:08.186734 oslogin_cache_refresh[1963]: Failure getting users, quitting Aug 13 00:39:08.189037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:39:08.186758 oslogin_cache_refresh[1963]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:39:08.189747 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:39:08.186814 oslogin_cache_refresh[1963]: Refreshing group entry cache Aug 13 00:39:08.194560 google_oslogin_nss_cache[1963]: oslogin_cache_refresh[1963]: Failure getting groups, quitting Aug 13 00:39:08.194560 google_oslogin_nss_cache[1963]: oslogin_cache_refresh[1963]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:39:08.192399 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:39:08.192073 oslogin_cache_refresh[1963]: Failure getting groups, quitting Aug 13 00:39:08.192089 oslogin_cache_refresh[1963]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:39:08.197356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:39:08.201592 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:39:08.209274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:39:08.210349 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:39:08.210955 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:39:08.211372 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:39:08.211636 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:39:08.253566 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:39:08.268617 extend-filesystems[1962]: Found /dev/nvme0n1p6 Aug 13 00:39:08.274869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:39:08.279036 (ntainerd)[1995]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:39:08.279383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 13 00:39:08.289075 update_engine[1976]: I20250813 00:39:08.288980 1976 main.cc:92] Flatcar Update Engine starting Aug 13 00:39:08.294698 extend-filesystems[1962]: Found /dev/nvme0n1p9 Aug 13 00:39:08.303595 extend-filesystems[1962]: Checking size of /dev/nvme0n1p9 Aug 13 00:39:08.307908 jq[1977]: true Aug 13 00:39:08.342945 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:39:08.344508 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:39:08.348696 tar[1981]: linux-amd64/helm Aug 13 00:39:08.371453 extend-filesystems[1962]: Resized partition /dev/nvme0n1p9 Aug 13 00:39:08.373842 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 00:39:08.389941 jq[2008]: true Aug 13 00:39:08.395791 coreos-metadata[1958]: Aug 13 00:39:08.393 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:39:08.399454 coreos-metadata[1958]: Aug 13 00:39:08.398 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 13 00:39:08.401398 extend-filesystems[2023]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:39:08.402895 coreos-metadata[1958]: Aug 13 00:39:08.402 INFO Fetch successful Aug 13 00:39:08.402895 coreos-metadata[1958]: Aug 13 00:39:08.402 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 13 00:39:08.403707 coreos-metadata[1958]: Aug 13 00:39:08.403 INFO Fetch successful Aug 13 00:39:08.403707 coreos-metadata[1958]: Aug 13 00:39:08.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 13 00:39:08.404433 coreos-metadata[1958]: Aug 13 00:39:08.404 INFO Fetch successful Aug 13 00:39:08.404433 coreos-metadata[1958]: Aug 13 00:39:08.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 13 00:39:08.410745 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 00:39:08.410865 coreos-metadata[1958]: Aug 13 00:39:08.409 INFO Fetch successful Aug 13 00:39:08.410865 coreos-metadata[1958]: Aug 13 00:39:08.409 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 13 00:39:08.410865 coreos-metadata[1958]: Aug 13 00:39:08.410 INFO Fetch failed with 404: resource not found Aug 13 00:39:08.410865 coreos-metadata[1958]: Aug 13 00:39:08.410 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 13 00:39:08.411991 coreos-metadata[1958]: Aug 13 00:39:08.411 INFO Fetch successful Aug 13 00:39:08.411991 coreos-metadata[1958]: Aug 13 00:39:08.411 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 13 00:39:08.423357 coreos-metadata[1958]: Aug 13 00:39:08.416 INFO Fetch successful Aug 13 00:39:08.423357 coreos-metadata[1958]: Aug 13 00:39:08.416 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 13 00:39:08.424060 coreos-metadata[1958]: Aug 13 00:39:08.423 INFO Fetch successful Aug 13 00:39:08.424060 coreos-metadata[1958]: Aug 13 00:39:08.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 13 00:39:08.425193 coreos-metadata[1958]: Aug 13 00:39:08.425 INFO Fetch successful Aug 13 00:39:08.425193 coreos-metadata[1958]: Aug 13 00:39:08.425 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 13 00:39:08.433772 coreos-metadata[1958]: Aug 13 00:39:08.428 INFO Fetch successful Aug 13 00:39:08.449964 dbus-daemon[1959]: [system] SELinux support is enabled Aug 13 
00:39:08.450451 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:39:08.457086 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:39:08.457133 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:39:08.458407 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:39:08.458437 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:39:08.465807 dbus-daemon[1959]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1839 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:57:09 UTC 2025 (1): Starting Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: ---------------------------------------------------- Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: corporation. Support and training for ntp-4 are Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: available at https://www.nwtime.org/support Aug 13 00:39:08.467036 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: ---------------------------------------------------- Aug 13 00:39:08.466293 ntpd[1965]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 20:57:09 UTC 2025 (1): Starting Aug 13 00:39:08.466317 ntpd[1965]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:39:08.466328 ntpd[1965]: ---------------------------------------------------- Aug 13 00:39:08.466338 ntpd[1965]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:39:08.466347 ntpd[1965]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:39:08.466355 ntpd[1965]: corporation. Support and training for ntp-4 are Aug 13 00:39:08.466364 ntpd[1965]: available at https://www.nwtime.org/support Aug 13 00:39:08.466373 ntpd[1965]: ---------------------------------------------------- Aug 13 00:39:08.472635 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:39:08.475560 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: proto: precision = 0.059 usec (-24) Aug 13 00:39:08.473114 ntpd[1965]: proto: precision = 0.059 usec (-24) Aug 13 00:39:08.476243 ntpd[1965]: basedate set to 2025-07-31 Aug 13 00:39:08.477684 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: basedate set to 2025-07-31 Aug 13 00:39:08.477684 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: gps base set to 2025-08-03 (week 2378) Aug 13 00:39:08.476269 ntpd[1965]: gps base set to 2025-08-03 (week 2378) Aug 13 00:39:08.478449 ntpd[1965]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:39:08.479654 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
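The coreos-metadata entries above show the IMDSv2 sequence the metadata agent walks through on EC2: a PUT to the token endpoint, then GET requests against the 2021-01-03 meta-data paths. A minimal Python sketch of that flow, assuming it runs on an instance with IMDS reachable at the usual link-local address; it illustrates the protocol only and is not the agent's own implementation.

# IMDSv2 sketch mirroring the coreos-metadata fetches logged above:
# obtain a session token with PUT, then read meta-data paths with it.
# Assumes IMDS is reachable at 169.254.169.254 (i.e. running on EC2).
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4",
                 "placement/availability-zone"):
        print(path, "=", imds_get(path, token))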
Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Listen normally on 3 eth0 172.31.31.138:123 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Listen normally on 4 lo [::1]:123 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: bind(21) AF_INET6 fe80::475:d7ff:fe6d:5c6f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: unable to create socket on eth0 (5) for fe80::475:d7ff:fe6d:5c6f%2#123 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: failed to init interface for address fe80::475:d7ff:fe6d:5c6f%2 Aug 13 00:39:08.480415 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: Listening on routing socket on fd #21 for interface updates Aug 13 00:39:08.478505 ntpd[1965]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:39:08.490593 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:39:08.490593 ntpd[1965]: 13 Aug 00:39:08 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:39:08.485734 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:39:08.478703 ntpd[1965]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:39:08.493340 update_engine[1976]: I20250813 00:39:08.491916 1976 update_check_scheduler.cc:74] Next update check in 9m49s Aug 13 00:39:08.490788 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:39:08.478741 ntpd[1965]: Listen normally on 3 eth0 172.31.31.138:123 Aug 13 00:39:08.478781 ntpd[1965]: Listen normally on 4 lo [::1]:123 Aug 13 00:39:08.478827 ntpd[1965]: bind(21) AF_INET6 fe80::475:d7ff:fe6d:5c6f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:39:08.478848 ntpd[1965]: unable to create socket on eth0 (5) for fe80::475:d7ff:fe6d:5c6f%2#123 Aug 13 00:39:08.478864 ntpd[1965]: failed to init interface for address fe80::475:d7ff:fe6d:5c6f%2 Aug 13 00:39:08.478897 ntpd[1965]: Listening on routing socket on fd #21 for interface updates Aug 13 00:39:08.481949 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:39:08.481982 ntpd[1965]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:39:08.551347 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 00:39:08.570423 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 00:39:08.570423 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:39:08.570423 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 00:39:08.570154 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:39:08.602177 bash[2049]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:39:08.602358 extend-filesystems[1962]: Resized filesystem in /dev/nvme0n1p9 Aug 13 00:39:08.571513 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:39:08.591446 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:39:08.597492 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
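The extend-filesystems output above records an online resize of the root ext4 filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB. A small sketch of checking the post-resize size from userspace via statvfs; it assumes the filesystem is mounted at /, as in the log.

# Confirm the size reported after the resize2fs run logged above
# (1489915 blocks * 4096 bytes ~= 5.7 GiB); assumes nvme0n1p9 is mounted on /.
import os

EXPECTED_BLOCKS = 1489915
BLOCK_SIZE = 4096

st = os.statvfs("/")
total_bytes = st.f_blocks * st.f_frsize
print(f"/ is {total_bytes / 2**30:.2f} GiB "
      f"(resize2fs reported ~{EXPECTED_BLOCKS * BLOCK_SIZE / 2**30:.2f} GiB)")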
Aug 13 00:39:08.598995 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:39:08.603847 systemd[1]: Starting sshkeys.service... Aug 13 00:39:08.691291 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:39:08.700987 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:39:08.729586 systemd-logind[1973]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:39:08.729630 systemd-logind[1973]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 13 00:39:08.729653 systemd-logind[1973]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:39:08.730638 systemd-logind[1973]: New seat seat0. Aug 13 00:39:08.732133 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:39:08.858300 coreos-metadata[2069]: Aug 13 00:39:08.858 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:39:08.861413 coreos-metadata[2069]: Aug 13 00:39:08.861 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 00:39:08.863647 coreos-metadata[2069]: Aug 13 00:39:08.863 INFO Fetch successful Aug 13 00:39:08.863744 coreos-metadata[2069]: Aug 13 00:39:08.863 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 00:39:08.866090 coreos-metadata[2069]: Aug 13 00:39:08.865 INFO Fetch successful Aug 13 00:39:08.873402 unknown[2069]: wrote ssh authorized keys file for user: core Aug 13 00:39:08.916046 locksmithd[2040]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:39:08.939268 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:39:08.946039 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:39:08.949005 dbus-daemon[1959]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2037 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:39:08.957233 update-ssh-keys[2113]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:39:08.958034 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:39:08.960229 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:39:08.986418 systemd[1]: Finished sshkeys.service. Aug 13 00:39:09.312671 polkitd[2127]: Started polkitd version 126 Aug 13 00:39:09.333965 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:39:09.334923 polkitd[2127]: Loading rules from directory /run/polkit-1/rules.d Aug 13 00:39:09.334996 polkitd[2127]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:39:09.335447 polkitd[2127]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 00:39:09.335478 polkitd[2127]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 00:39:09.335551 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:39:09.336823 polkitd[2127]: Finished loading, compiling and executing 2 rules Aug 13 00:39:09.337699 systemd[1]: Started polkit.service - Authorization Manager. 
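coreos-metadata-sshkeys above retrieves the instance's public keys from the same metadata service (public-keys, then public-keys/0/openssh-key), after which update-ssh-keys rewrites /home/core/.ssh/authorized_keys. A rough sketch of that sequence; the target path follows the log, while ownership, permissions and error handling are left out.

# Sketch of the key sync recorded above: list public-keys, fetch each
# openssh-key, append to /home/core/.ssh/authorized_keys (path from the log).
import os
import urllib.request

IMDS = "http://169.254.169.254"

def put_token() -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = put_token()
# index lines look like "0=keypair-name"; fetch the key behind each index
indices = [line.split("=")[0] for line in get("public-keys", token).splitlines()]
keys = [get(f"public-keys/{i}/openssh-key", token) for i in indices]

auth_path = "/home/core/.ssh/authorized_keys"
os.makedirs(os.path.dirname(auth_path), mode=0o700, exist_ok=True)
with open(auth_path, "a") as fh:
    fh.write("\n".join(keys) + "\n")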
Aug 13 00:39:09.339925 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:39:09.340562 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:39:09.346974 containerd[1995]: time="2025-08-13T00:39:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:39:09.348253 containerd[1995]: time="2025-08-13T00:39:09.348210481Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:39:09.375245 systemd-hostnamed[2037]: Hostname set to (transient) Aug 13 00:39:09.375928 systemd-resolved[1841]: System hostname changed to 'ip-172-31-31-138'. Aug 13 00:39:09.381618 containerd[1995]: time="2025-08-13T00:39:09.381319952Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.373µs" Aug 13 00:39:09.381618 containerd[1995]: time="2025-08-13T00:39:09.381362388Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:39:09.381618 containerd[1995]: time="2025-08-13T00:39:09.381388196Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:39:09.385751 containerd[1995]: time="2025-08-13T00:39:09.385704030Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:39:09.385751 containerd[1995]: time="2025-08-13T00:39:09.385757297Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:39:09.385906 containerd[1995]: time="2025-08-13T00:39:09.385789272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:39:09.385906 containerd[1995]: time="2025-08-13T00:39:09.385875884Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:39:09.385906 containerd[1995]: time="2025-08-13T00:39:09.385896412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386549 containerd[1995]: time="2025-08-13T00:39:09.386220449Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386549 containerd[1995]: time="2025-08-13T00:39:09.386255317Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386549 containerd[1995]: time="2025-08-13T00:39:09.386272724Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386549 containerd[1995]: time="2025-08-13T00:39:09.386286764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386549 containerd[1995]: time="2025-08-13T00:39:09.386389191Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386768 containerd[1995]: 
time="2025-08-13T00:39:09.386691468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386768 containerd[1995]: time="2025-08-13T00:39:09.386741695Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:39:09.386768 containerd[1995]: time="2025-08-13T00:39:09.386759730Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:39:09.387059 containerd[1995]: time="2025-08-13T00:39:09.386812402Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:39:09.387434 containerd[1995]: time="2025-08-13T00:39:09.387185449Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:39:09.387434 containerd[1995]: time="2025-08-13T00:39:09.387280724Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:39:09.393645 containerd[1995]: time="2025-08-13T00:39:09.393598546Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:39:09.394216 containerd[1995]: time="2025-08-13T00:39:09.394128427Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394159014Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394361572Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394381001Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394396835Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394424597Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394442089Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394458880Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394473002Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394486039Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:39:09.394758 containerd[1995]: time="2025-08-13T00:39:09.394518684Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395597912Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395634314Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395658174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395682245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395697401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395712341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395728873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395742794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395759880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395774408Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395788774Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395876987Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:39:09.396262 containerd[1995]: time="2025-08-13T00:39:09.395895196Z" level=info msg="Start snapshots syncer" Aug 13 00:39:09.397554 containerd[1995]: time="2025-08-13T00:39:09.397091110Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:39:09.398173 containerd[1995]: time="2025-08-13T00:39:09.397863607Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:39:09.398601 containerd[1995]: time="2025-08-13T00:39:09.398579512Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399221295Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399402242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399432065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399447381Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399462659Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399486639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399501710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:39:09.399823 containerd[1995]: time="2025-08-13T00:39:09.399516606Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:39:09.400498 containerd[1995]: time="2025-08-13T00:39:09.400410092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:39:09.400498 containerd[1995]: 
time="2025-08-13T00:39:09.400438500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:39:09.400498 containerd[1995]: time="2025-08-13T00:39:09.400457847Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:39:09.401587 containerd[1995]: time="2025-08-13T00:39:09.401559141Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:39:09.401647 containerd[1995]: time="2025-08-13T00:39:09.401617987Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:39:09.401647 containerd[1995]: time="2025-08-13T00:39:09.401633926Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:39:09.401730 containerd[1995]: time="2025-08-13T00:39:09.401649409Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:39:09.401730 containerd[1995]: time="2025-08-13T00:39:09.401662671Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:39:09.401730 containerd[1995]: time="2025-08-13T00:39:09.401677082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:39:09.401730 containerd[1995]: time="2025-08-13T00:39:09.401699600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:39:09.401730 containerd[1995]: time="2025-08-13T00:39:09.401721863Z" level=info msg="runtime interface created" Aug 13 00:39:09.401730 containerd[1995]: time="2025-08-13T00:39:09.401729191Z" level=info msg="created NRI interface" Aug 13 00:39:09.401919 containerd[1995]: time="2025-08-13T00:39:09.401741476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:39:09.401919 containerd[1995]: time="2025-08-13T00:39:09.401764264Z" level=info msg="Connect containerd service" Aug 13 00:39:09.401919 containerd[1995]: time="2025-08-13T00:39:09.401825339Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:39:09.408558 containerd[1995]: time="2025-08-13T00:39:09.408458641Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:39:09.411890 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:39:09.460099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Aug 13 00:39:09.466829 ntpd[1965]: bind(24) AF_INET6 fe80::475:d7ff:fe6d:5c6f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:39:09.466872 ntpd[1965]: unable to create socket on eth0 (6) for fe80::475:d7ff:fe6d:5c6f%2#123 Aug 13 00:39:09.467203 ntpd[1965]: 13 Aug 00:39:09 ntpd[1965]: bind(24) AF_INET6 fe80::475:d7ff:fe6d:5c6f%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:39:09.467203 ntpd[1965]: 13 Aug 00:39:09 ntpd[1965]: unable to create socket on eth0 (6) for fe80::475:d7ff:fe6d:5c6f%2#123 Aug 13 00:39:09.467203 ntpd[1965]: 13 Aug 00:39:09 ntpd[1965]: failed to init interface for address fe80::475:d7ff:fe6d:5c6f%2 Aug 13 00:39:09.466887 ntpd[1965]: failed to init interface for address fe80::475:d7ff:fe6d:5c6f%2 Aug 13 00:39:09.467324 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:39:09.472760 systemd[1]: Started sshd@0-172.31.31.138:22-139.178.68.195:44716.service - OpenSSH per-connection server daemon (139.178.68.195:44716). Aug 13 00:39:09.498261 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:39:09.498566 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:39:09.504927 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:39:09.563726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:39:09.568994 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:39:09.596242 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:39:09.598201 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:39:09.742213 tar[1981]: linux-amd64/LICENSE Aug 13 00:39:09.742213 tar[1981]: linux-amd64/README.md Aug 13 00:39:09.761211 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:39:09.775675 systemd-networkd[1839]: eth0: Gained IPv6LL Aug 13 00:39:09.780618 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:39:09.782844 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:39:09.788195 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 00:39:09.794154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:39:09.807665 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.814930981Z" level=info msg="Start subscribing containerd event" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815009582Z" level=info msg="Start recovering state" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815131271Z" level=info msg="Start event monitor" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815149301Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815166191Z" level=info msg="Start streaming server" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815183856Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815194535Z" level=info msg="runtime interface starting up..." Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815202599Z" level=info msg="starting plugins..." 
Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815218239Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815694791Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:39:09.815842 containerd[1995]: time="2025-08-13T00:39:09.815753757Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:39:09.815904 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:39:09.816422 containerd[1995]: time="2025-08-13T00:39:09.816394010Z" level=info msg="containerd successfully booted in 0.471822s" Aug 13 00:39:09.821173 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 44716 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:09.824100 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:09.855375 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:39:09.861541 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:39:09.892462 systemd-logind[1973]: New session 1 of user core. Aug 13 00:39:09.893476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:39:09.910926 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:39:09.918854 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:39:09.932514 amazon-ssm-agent[2207]: Initializing new seelog logger Aug 13 00:39:09.935821 amazon-ssm-agent[2207]: New Seelog Logger Creation Complete Aug 13 00:39:09.935821 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.935821 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.935821 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 processing appconfig overrides Aug 13 00:39:09.937421 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.937421 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.937421 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 processing appconfig overrides Aug 13 00:39:09.937290 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:39:09.937890 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.937890 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.937960 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 processing appconfig overrides Aug 13 00:39:09.939681 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9362 INFO Proxy environment variables: Aug 13 00:39:09.942604 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.942604 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:09.942977 amazon-ssm-agent[2207]: 2025/08/13 00:39:09 processing appconfig overrides Aug 13 00:39:09.943192 systemd-logind[1973]: New session c1 of user core. 
Aug 13 00:39:10.039644 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9363 INFO https_proxy: Aug 13 00:39:10.139554 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9363 INFO http_proxy: Aug 13 00:39:10.223064 systemd[2226]: Queued start job for default target default.target. Aug 13 00:39:10.229704 systemd[2226]: Created slice app.slice - User Application Slice. Aug 13 00:39:10.229739 systemd[2226]: Reached target paths.target - Paths. Aug 13 00:39:10.229781 systemd[2226]: Reached target timers.target - Timers. Aug 13 00:39:10.233656 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:39:10.236709 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9363 INFO no_proxy: Aug 13 00:39:10.265866 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:39:10.268927 systemd[2226]: Reached target sockets.target - Sockets. Aug 13 00:39:10.269024 systemd[2226]: Reached target basic.target - Basic System. Aug 13 00:39:10.269079 systemd[2226]: Reached target default.target - Main User Target. Aug 13 00:39:10.269123 systemd[2226]: Startup finished in 306ms. Aug 13 00:39:10.269131 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:39:10.277826 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:39:10.335047 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9365 INFO Checking if agent identity type OnPrem can be assumed Aug 13 00:39:10.435244 systemd[1]: Started sshd@1-172.31.31.138:22-139.178.68.195:44720.service - OpenSSH per-connection server daemon (139.178.68.195:44720). Aug 13 00:39:10.437565 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9367 INFO Checking if agent identity type EC2 can be assumed Aug 13 00:39:10.536219 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9978 INFO Agent will take identity from EC2 Aug 13 00:39:10.541913 amazon-ssm-agent[2207]: 2025/08/13 00:39:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:10.541913 amazon-ssm-agent[2207]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:39:10.542037 amazon-ssm-agent[2207]: 2025/08/13 00:39:10 processing appconfig overrides Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9995 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9995 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9995 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9995 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:09.9995 INFO [Registrar] Starting registrar module Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.0016 INFO [EC2Identity] Checking disk for registration info Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.0016 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Aug 13 00:39:10.567055 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.0016 INFO [EC2Identity] Generating registration keypair Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.4960 INFO [EC2Identity] Checking write access before registering Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.4965 INFO [EC2Identity] Registering EC2 instance with Systems Manager Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5417 INFO [EC2Identity] EC2 registration was successful. Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5417 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5418 INFO [CredentialRefresher] credentialRefresher has started Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5418 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5668 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 00:39:10.567431 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5670 INFO [CredentialRefresher] Credentials ready Aug 13 00:39:10.635108 amazon-ssm-agent[2207]: 2025-08-13 00:39:10.5671 INFO [CredentialRefresher] Next credential rotation will be in 29.999994148633334 minutes Aug 13 00:39:10.637355 sshd[2239]: Accepted publickey for core from 139.178.68.195 port 44720 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:10.638508 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:10.645623 systemd-logind[1973]: New session 2 of user core. Aug 13 00:39:10.651796 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:39:10.787042 sshd[2241]: Connection closed by 139.178.68.195 port 44720 Aug 13 00:39:10.786730 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:10.790735 systemd[1]: sshd@1-172.31.31.138:22-139.178.68.195:44720.service: Deactivated successfully. Aug 13 00:39:10.792665 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:39:10.796184 systemd-logind[1973]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:39:10.797724 systemd-logind[1973]: Removed session 2. Aug 13 00:39:10.823869 systemd[1]: Started sshd@2-172.31.31.138:22-139.178.68.195:44726.service - OpenSSH per-connection server daemon (139.178.68.195:44726). Aug 13 00:39:11.005258 sshd[2247]: Accepted publickey for core from 139.178.68.195 port 44726 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:11.006076 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:11.012441 systemd-logind[1973]: New session 3 of user core. Aug 13 00:39:11.017754 systemd[1]: Started session-3.scope - Session 3 of User core. 
Aug 13 00:39:11.139391 sshd[2249]: Connection closed by 139.178.68.195 port 44726 Aug 13 00:39:11.139946 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:11.144296 systemd-logind[1973]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:39:11.145966 systemd[1]: sshd@2-172.31.31.138:22-139.178.68.195:44726.service: Deactivated successfully. Aug 13 00:39:11.148635 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:39:11.151450 systemd-logind[1973]: Removed session 3. Aug 13 00:39:11.569568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:11.572277 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:39:11.573961 systemd[1]: Startup finished in 2.822s (kernel) + 8.965s (initrd) + 7.991s (userspace) = 19.780s. Aug 13 00:39:11.583403 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:39:11.587416 amazon-ssm-agent[2207]: 2025-08-13 00:39:11.5872 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 13 00:39:11.689444 amazon-ssm-agent[2207]: 2025-08-13 00:39:11.6089 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Aug 13 00:39:11.790803 amazon-ssm-agent[2207]: 2025-08-13 00:39:11.6089 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 13 00:39:12.466761 ntpd[1965]: Listen normally on 7 eth0 [fe80::475:d7ff:fe6d:5c6f%2]:123 Aug 13 00:39:12.467686 ntpd[1965]: 13 Aug 00:39:12 ntpd[1965]: Listen normally on 7 eth0 [fe80::475:d7ff:fe6d:5c6f%2]:123 Aug 13 00:39:12.553356 kubelet[2261]: E0813 00:39:12.553300 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:39:12.554977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:39:12.555137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:39:12.555431 systemd[1]: kubelet.service: Consumed 1.074s CPU time, 264.7M memory peak. Aug 13 00:39:17.263019 systemd-resolved[1841]: Clock change detected. Flushing caches. Aug 13 00:39:22.979501 systemd[1]: Started sshd@3-172.31.31.138:22-139.178.68.195:36222.service - OpenSSH per-connection server daemon (139.178.68.195:36222). Aug 13 00:39:23.164303 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 36222 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:23.165827 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:23.171096 systemd-logind[1973]: New session 4 of user core. Aug 13 00:39:23.177835 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:39:23.294731 sshd[2287]: Connection closed by 139.178.68.195 port 36222 Aug 13 00:39:23.295300 sshd-session[2285]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:23.298623 systemd[1]: sshd@3-172.31.31.138:22-139.178.68.195:36222.service: Deactivated successfully. Aug 13 00:39:23.300262 systemd[1]: session-4.scope: Deactivated successfully. 
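The kubelet exit above, and the scheduled restart that follows, come from the missing /var/lib/kubelet/config.yaml; on kubeadm-provisioned nodes that file only appears once kubeadm init or join has run, so the unit keeps restarting until then. For orientation, a hedged sketch of the smallest KubeletConfiguration document of the kind that lands there; every value shown is illustrative rather than recovered from this log.

# Illustrative only: the general shape of /var/lib/kubelet/config.yaml,
# whose absence causes the kubelet exits seen above.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches SystemdCgroup=true in the CRI config above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
clusterDNS:
  - 10.96.0.10                 # placeholder cluster DNS address
clusterDomain: cluster.local
"""

with open("/var/lib/kubelet/config.yaml", "w") as fh:
    fh.write(CONFIG)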
Aug 13 00:39:23.302618 systemd-logind[1973]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:39:23.303907 systemd-logind[1973]: Removed session 4. Aug 13 00:39:23.329824 systemd[1]: Started sshd@4-172.31.31.138:22-139.178.68.195:36236.service - OpenSSH per-connection server daemon (139.178.68.195:36236). Aug 13 00:39:23.500949 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 36236 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:23.502391 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:23.508982 systemd-logind[1973]: New session 5 of user core. Aug 13 00:39:23.514805 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:39:23.630598 sshd[2295]: Connection closed by 139.178.68.195 port 36236 Aug 13 00:39:23.631725 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:23.635339 systemd[1]: sshd@4-172.31.31.138:22-139.178.68.195:36236.service: Deactivated successfully. Aug 13 00:39:23.637696 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:39:23.640024 systemd-logind[1973]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:39:23.641383 systemd-logind[1973]: Removed session 5. Aug 13 00:39:23.667178 systemd[1]: Started sshd@5-172.31.31.138:22-139.178.68.195:36244.service - OpenSSH per-connection server daemon (139.178.68.195:36244). Aug 13 00:39:23.841441 sshd[2301]: Accepted publickey for core from 139.178.68.195 port 36244 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:23.842757 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:23.848205 systemd-logind[1973]: New session 6 of user core. Aug 13 00:39:23.850735 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:39:23.971655 sshd[2303]: Connection closed by 139.178.68.195 port 36244 Aug 13 00:39:23.972221 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:23.976406 systemd[1]: sshd@5-172.31.31.138:22-139.178.68.195:36244.service: Deactivated successfully. Aug 13 00:39:23.978539 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:39:23.979675 systemd-logind[1973]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:39:23.981434 systemd-logind[1973]: Removed session 6. Aug 13 00:39:24.013969 systemd[1]: Started sshd@6-172.31.31.138:22-139.178.68.195:36252.service - OpenSSH per-connection server daemon (139.178.68.195:36252). Aug 13 00:39:24.189965 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 36252 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:24.191389 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:24.196634 systemd-logind[1973]: New session 7 of user core. Aug 13 00:39:24.203817 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 13 00:39:24.339691 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:39:24.339984 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:39:24.356102 sudo[2312]: pam_unix(sudo:session): session closed for user root Aug 13 00:39:24.378455 sshd[2311]: Connection closed by 139.178.68.195 port 36252 Aug 13 00:39:24.379253 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:24.383686 systemd[1]: sshd@6-172.31.31.138:22-139.178.68.195:36252.service: Deactivated successfully. Aug 13 00:39:24.385505 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:39:24.387360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:39:24.388506 systemd-logind[1973]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:39:24.391523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:39:24.393514 systemd-logind[1973]: Removed session 7. Aug 13 00:39:24.415841 systemd[1]: Started sshd@7-172.31.31.138:22-139.178.68.195:36268.service - OpenSSH per-connection server daemon (139.178.68.195:36268). Aug 13 00:39:24.595334 sshd[2321]: Accepted publickey for core from 139.178.68.195 port 36268 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:24.597978 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:24.613438 systemd-logind[1973]: New session 8 of user core. Aug 13 00:39:24.619829 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:39:24.663818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:24.673991 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:39:24.720866 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:39:24.721284 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:39:24.729134 kubelet[2329]: E0813 00:39:24.728930 2329 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:39:24.730635 sudo[2335]: pam_unix(sudo:session): session closed for user root Aug 13 00:39:24.734994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:39:24.735209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:39:24.736661 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.8M memory peak. Aug 13 00:39:24.738264 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:39:24.738689 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:39:24.749789 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:39:24.795310 augenrules[2359]: No rules Aug 13 00:39:24.796802 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:39:24.797066 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Aug 13 00:39:24.798293 sudo[2334]: pam_unix(sudo:session): session closed for user root Aug 13 00:39:24.821689 sshd[2323]: Connection closed by 139.178.68.195 port 36268 Aug 13 00:39:24.822217 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:24.826889 systemd[1]: sshd@7-172.31.31.138:22-139.178.68.195:36268.service: Deactivated successfully. Aug 13 00:39:24.829084 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:39:24.830096 systemd-logind[1973]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:39:24.831686 systemd-logind[1973]: Removed session 8. Aug 13 00:39:24.857158 systemd[1]: Started sshd@8-172.31.31.138:22-139.178.68.195:36274.service - OpenSSH per-connection server daemon (139.178.68.195:36274). Aug 13 00:39:25.030778 sshd[2368]: Accepted publickey for core from 139.178.68.195 port 36274 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:39:25.033121 sshd-session[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:25.039142 systemd-logind[1973]: New session 9 of user core. Aug 13 00:39:25.043811 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:39:25.141842 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:39:25.142119 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:39:25.822021 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:39:25.845195 (dockerd)[2391]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:39:26.292236 dockerd[2391]: time="2025-08-13T00:39:26.292169978Z" level=info msg="Starting up" Aug 13 00:39:26.295772 dockerd[2391]: time="2025-08-13T00:39:26.295232564Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:39:26.346404 systemd[1]: var-lib-docker-metacopy\x2dcheck2663209073-merged.mount: Deactivated successfully. Aug 13 00:39:26.367832 dockerd[2391]: time="2025-08-13T00:39:26.367765293Z" level=info msg="Loading containers: start." Aug 13 00:39:26.379615 kernel: Initializing XFRM netlink socket Aug 13 00:39:26.643494 (udev-worker)[2412]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:39:26.693200 systemd-networkd[1839]: docker0: Link UP Aug 13 00:39:26.699318 dockerd[2391]: time="2025-08-13T00:39:26.699259148Z" level=info msg="Loading containers: done." Aug 13 00:39:26.712502 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1242515358-merged.mount: Deactivated successfully. 
Aug 13 00:39:26.721305 dockerd[2391]: time="2025-08-13T00:39:26.720960591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:39:26.721305 dockerd[2391]: time="2025-08-13T00:39:26.721046578Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:39:26.721305 dockerd[2391]: time="2025-08-13T00:39:26.721149391Z" level=info msg="Initializing buildkit" Aug 13 00:39:26.749755 dockerd[2391]: time="2025-08-13T00:39:26.749693260Z" level=info msg="Completed buildkit initialization" Aug 13 00:39:26.758782 dockerd[2391]: time="2025-08-13T00:39:26.758727904Z" level=info msg="Daemon has completed initialization" Aug 13 00:39:26.759302 dockerd[2391]: time="2025-08-13T00:39:26.758797753Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:39:26.759173 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:39:27.909818 containerd[1995]: time="2025-08-13T00:39:27.909773038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:39:28.464016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267937823.mount: Deactivated successfully. Aug 13 00:39:30.301383 containerd[1995]: time="2025-08-13T00:39:30.301318660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:30.302297 containerd[1995]: time="2025-08-13T00:39:30.301901704Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 00:39:30.304437 containerd[1995]: time="2025-08-13T00:39:30.304394242Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:30.308214 containerd[1995]: time="2025-08-13T00:39:30.308144424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:30.309983 containerd[1995]: time="2025-08-13T00:39:30.309771181Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.399946267s" Aug 13 00:39:30.309983 containerd[1995]: time="2025-08-13T00:39:30.309820509Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:39:30.310962 containerd[1995]: time="2025-08-13T00:39:30.310917013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:39:32.342292 containerd[1995]: time="2025-08-13T00:39:32.341331388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:32.342292 containerd[1995]: time="2025-08-13T00:39:32.342257655Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 00:39:32.343203 containerd[1995]: time="2025-08-13T00:39:32.343172801Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:32.346455 containerd[1995]: time="2025-08-13T00:39:32.346414378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:32.347349 containerd[1995]: time="2025-08-13T00:39:32.347304235Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 2.036349053s" Aug 13 00:39:32.347590 containerd[1995]: time="2025-08-13T00:39:32.347352155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:39:32.347798 containerd[1995]: time="2025-08-13T00:39:32.347780022Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:39:34.033454 containerd[1995]: time="2025-08-13T00:39:34.033394856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:34.037180 containerd[1995]: time="2025-08-13T00:39:34.037103094Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 00:39:34.041372 containerd[1995]: time="2025-08-13T00:39:34.041308706Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:34.046727 containerd[1995]: time="2025-08-13T00:39:34.046658480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:34.047781 containerd[1995]: time="2025-08-13T00:39:34.047453925Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.699646589s" Aug 13 00:39:34.047781 containerd[1995]: time="2025-08-13T00:39:34.047489349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:39:34.048165 containerd[1995]: time="2025-08-13T00:39:34.048140872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:39:34.920483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:39:34.922731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:39:35.185489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054063588.mount: Deactivated successfully. Aug 13 00:39:35.503779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:35.514184 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:39:35.586046 kubelet[2670]: E0813 00:39:35.585961 2670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:39:35.590958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:39:35.591317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:39:35.592067 systemd[1]: kubelet.service: Consumed 206ms CPU time, 109.8M memory peak. Aug 13 00:39:36.025130 containerd[1995]: time="2025-08-13T00:39:36.025087602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:36.029749 containerd[1995]: time="2025-08-13T00:39:36.029699814Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 00:39:36.032017 containerd[1995]: time="2025-08-13T00:39:36.031950904Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:36.036003 containerd[1995]: time="2025-08-13T00:39:36.035936732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:36.039227 containerd[1995]: time="2025-08-13T00:39:36.039160329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.99098919s" Aug 13 00:39:36.039227 containerd[1995]: time="2025-08-13T00:39:36.039194662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:39:36.039805 containerd[1995]: time="2025-08-13T00:39:36.039719397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:39:36.541486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926891507.mount: Deactivated successfully. 
Aug 13 00:39:37.615439 containerd[1995]: time="2025-08-13T00:39:37.615275627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:37.621512 containerd[1995]: time="2025-08-13T00:39:37.621450391Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:39:37.632025 containerd[1995]: time="2025-08-13T00:39:37.631978406Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:37.638525 containerd[1995]: time="2025-08-13T00:39:37.638441993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:37.639584 containerd[1995]: time="2025-08-13T00:39:37.639427056Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.599671944s" Aug 13 00:39:37.639584 containerd[1995]: time="2025-08-13T00:39:37.639466036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:39:37.640231 containerd[1995]: time="2025-08-13T00:39:37.640187787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:39:38.099891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998652788.mount: Deactivated successfully. 
Aug 13 00:39:38.107827 containerd[1995]: time="2025-08-13T00:39:38.107757812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:39:38.108995 containerd[1995]: time="2025-08-13T00:39:38.108768670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:39:38.110409 containerd[1995]: time="2025-08-13T00:39:38.110373249Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:39:38.113440 containerd[1995]: time="2025-08-13T00:39:38.112759152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:39:38.113440 containerd[1995]: time="2025-08-13T00:39:38.113313229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.080547ms" Aug 13 00:39:38.113440 containerd[1995]: time="2025-08-13T00:39:38.113349785Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:39:38.114095 containerd[1995]: time="2025-08-13T00:39:38.114066949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:39:38.653018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698905016.mount: Deactivated successfully. 
Aug 13 00:39:40.981988 containerd[1995]: time="2025-08-13T00:39:40.981838778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:40.983901 containerd[1995]: time="2025-08-13T00:39:40.983841248Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 00:39:40.986728 containerd[1995]: time="2025-08-13T00:39:40.986678643Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:40.989184 containerd[1995]: time="2025-08-13T00:39:40.989059502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:39:40.990451 containerd[1995]: time="2025-08-13T00:39:40.990383694Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.876226011s" Aug 13 00:39:40.990451 containerd[1995]: time="2025-08-13T00:39:40.990433936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:39:41.196639 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:39:44.069027 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:44.069296 systemd[1]: kubelet.service: Consumed 206ms CPU time, 109.8M memory peak. Aug 13 00:39:44.072146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:39:44.109317 systemd[1]: Reload requested from client PID 2817 ('systemctl') (unit session-9.scope)... Aug 13 00:39:44.109344 systemd[1]: Reloading... Aug 13 00:39:44.249592 zram_generator::config[2857]: No configuration found. Aug 13 00:39:44.403636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:39:44.540260 systemd[1]: Reloading finished in 430 ms. Aug 13 00:39:44.605008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:39:44.605114 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:39:44.605406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:44.605462 systemd[1]: kubelet.service: Consumed 144ms CPU time, 98M memory peak. Aug 13 00:39:44.607993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:39:44.851439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:44.861408 (kubelet)[2924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:39:44.911590 kubelet[2924]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:39:44.911590 kubelet[2924]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:39:44.911590 kubelet[2924]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:39:44.914330 kubelet[2924]: I0813 00:39:44.914021 2924 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:39:45.610601 kubelet[2924]: I0813 00:39:45.610541 2924 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:39:45.610601 kubelet[2924]: I0813 00:39:45.610591 2924 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:39:45.610966 kubelet[2924]: I0813 00:39:45.610942 2924 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:39:45.725907 kubelet[2924]: E0813 00:39:45.724338 2924 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:45.726586 kubelet[2924]: I0813 00:39:45.726526 2924 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:39:45.747129 kubelet[2924]: I0813 00:39:45.747095 2924 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:39:45.755519 kubelet[2924]: I0813 00:39:45.755463 2924 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:39:45.759385 kubelet[2924]: I0813 00:39:45.759328 2924 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:39:45.759611 kubelet[2924]: I0813 00:39:45.759551 2924 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:39:45.759810 kubelet[2924]: I0813 00:39:45.759607 2924 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-138","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:39:45.759929 kubelet[2924]: I0813 00:39:45.759820 2924 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:39:45.759929 kubelet[2924]: I0813 00:39:45.759832 2924 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:39:45.760906 kubelet[2924]: I0813 00:39:45.760870 2924 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:39:45.769615 kubelet[2924]: I0813 00:39:45.768408 2924 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:39:45.769615 kubelet[2924]: I0813 00:39:45.768581 2924 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:39:45.770890 kubelet[2924]: I0813 00:39:45.770671 2924 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:39:45.770890 kubelet[2924]: I0813 00:39:45.770712 2924 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:39:45.778134 kubelet[2924]: I0813 00:39:45.778092 2924 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:39:45.790328 kubelet[2924]: I0813 00:39:45.790253 2924 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:39:45.790542 kubelet[2924]: W0813 00:39:45.790503 2924 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 13 00:39:45.791483 kubelet[2924]: I0813 00:39:45.791278 2924 server.go:1274] "Started kubelet" Aug 13 00:39:45.794756 kubelet[2924]: W0813 00:39:45.794680 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:45.794893 kubelet[2924]: E0813 00:39:45.794776 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:45.798171 kubelet[2924]: I0813 00:39:45.797614 2924 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:39:45.805146 kubelet[2924]: I0813 00:39:45.804177 2924 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:39:45.809527 kubelet[2924]: I0813 00:39:45.809375 2924 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:39:45.809726 kubelet[2924]: I0813 00:39:45.809687 2924 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:39:45.809941 kubelet[2924]: W0813 00:39:45.809897 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-138&limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:45.809981 kubelet[2924]: E0813 00:39:45.809950 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-138&limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:45.811614 kubelet[2924]: I0813 00:39:45.810631 2924 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:39:45.816050 kubelet[2924]: I0813 00:39:45.815238 2924 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:39:45.833243 kubelet[2924]: I0813 00:39:45.831682 2924 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:39:45.833243 kubelet[2924]: E0813 00:39:45.832430 2924 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-138\" not found" Aug 13 00:39:45.835368 kubelet[2924]: I0813 00:39:45.835332 2924 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:39:45.836833 kubelet[2924]: I0813 00:39:45.835419 2924 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:39:45.839010 kubelet[2924]: E0813 00:39:45.819915 2924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.138:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.138:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-138.185b2ca961b46a4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-138,UID:ip-172-31-31-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-138,},FirstTimestamp:2025-08-13 00:39:45.791248972 +0000 UTC m=+0.925205880,LastTimestamp:2025-08-13 00:39:45.791248972 +0000 UTC m=+0.925205880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-138,}" Aug 13 00:39:45.841086 kubelet[2924]: E0813 00:39:45.839346 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-138?timeout=10s\": dial tcp 172.31.31.138:6443: connect: connection refused" interval="200ms" Aug 13 00:39:45.841086 kubelet[2924]: W0813 00:39:45.839502 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:45.841086 kubelet[2924]: E0813 00:39:45.839595 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:45.850418 kubelet[2924]: I0813 00:39:45.850384 2924 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:39:45.870307 kubelet[2924]: I0813 00:39:45.869338 2924 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:39:45.870307 kubelet[2924]: I0813 00:39:45.869375 2924 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:39:45.871883 kubelet[2924]: E0813 00:39:45.871830 2924 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:39:45.875417 kubelet[2924]: I0813 00:39:45.873592 2924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:39:45.883674 kubelet[2924]: I0813 00:39:45.883642 2924 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:39:45.884653 kubelet[2924]: I0813 00:39:45.884635 2924 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:39:45.884837 kubelet[2924]: I0813 00:39:45.884823 2924 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:39:45.887126 kubelet[2924]: E0813 00:39:45.887095 2924 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:39:45.887280 kubelet[2924]: W0813 00:39:45.887253 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:45.887330 kubelet[2924]: E0813 00:39:45.887302 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:45.906526 kubelet[2924]: I0813 00:39:45.906464 2924 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:39:45.908104 kubelet[2924]: I0813 00:39:45.906615 2924 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:39:45.908104 kubelet[2924]: I0813 00:39:45.906640 2924 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:39:45.915811 kubelet[2924]: I0813 00:39:45.915735 2924 policy_none.go:49] "None policy: Start" Aug 13 00:39:45.917604 kubelet[2924]: I0813 00:39:45.917533 2924 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:39:45.923212 kubelet[2924]: I0813 00:39:45.917693 2924 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:39:45.932683 kubelet[2924]: E0813 00:39:45.932657 2924 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-138\" not found" Aug 13 00:39:45.938871 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:39:45.954056 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:39:45.959399 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:39:45.971690 kubelet[2924]: I0813 00:39:45.971667 2924 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:39:45.972217 kubelet[2924]: I0813 00:39:45.972138 2924 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:39:45.972217 kubelet[2924]: I0813 00:39:45.972155 2924 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:39:45.975548 kubelet[2924]: I0813 00:39:45.972550 2924 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:39:45.975548 kubelet[2924]: E0813 00:39:45.974071 2924 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-138\" not found" Aug 13 00:39:46.003874 systemd[1]: Created slice kubepods-burstable-pod316ab10003df6c3a7916163d3fcf984f.slice - libcontainer container kubepods-burstable-pod316ab10003df6c3a7916163d3fcf984f.slice. 
Aug 13 00:39:46.023395 systemd[1]: Created slice kubepods-burstable-pod74b913fb971333e5a9cd9097b2228d24.slice - libcontainer container kubepods-burstable-pod74b913fb971333e5a9cd9097b2228d24.slice. Aug 13 00:39:46.030024 systemd[1]: Created slice kubepods-burstable-pod7a55c946170ca8b59faf7f06d35966fa.slice - libcontainer container kubepods-burstable-pod7a55c946170ca8b59faf7f06d35966fa.slice. Aug 13 00:39:46.041389 kubelet[2924]: E0813 00:39:46.041331 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-138?timeout=10s\": dial tcp 172.31.31.138:6443: connect: connection refused" interval="400ms" Aug 13 00:39:46.075548 kubelet[2924]: I0813 00:39:46.075514 2924 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-138" Aug 13 00:39:46.075993 kubelet[2924]: E0813 00:39:46.075964 2924 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.138:6443/api/v1/nodes\": dial tcp 172.31.31.138:6443: connect: connection refused" node="ip-172-31-31-138" Aug 13 00:39:46.137175 kubelet[2924]: I0813 00:39:46.137028 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:46.137175 kubelet[2924]: I0813 00:39:46.137069 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:46.137175 kubelet[2924]: I0813 00:39:46.137096 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:46.137175 kubelet[2924]: I0813 00:39:46.137112 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a55c946170ca8b59faf7f06d35966fa-ca-certs\") pod \"kube-apiserver-ip-172-31-31-138\" (UID: \"7a55c946170ca8b59faf7f06d35966fa\") " pod="kube-system/kube-apiserver-ip-172-31-31-138" Aug 13 00:39:46.137175 kubelet[2924]: I0813 00:39:46.137128 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:46.137397 kubelet[2924]: I0813 00:39:46.137144 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: 
\"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:46.138061 kubelet[2924]: I0813 00:39:46.137158 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74b913fb971333e5a9cd9097b2228d24-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-138\" (UID: \"74b913fb971333e5a9cd9097b2228d24\") " pod="kube-system/kube-scheduler-ip-172-31-31-138" Aug 13 00:39:46.138169 kubelet[2924]: I0813 00:39:46.138072 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a55c946170ca8b59faf7f06d35966fa-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-138\" (UID: \"7a55c946170ca8b59faf7f06d35966fa\") " pod="kube-system/kube-apiserver-ip-172-31-31-138" Aug 13 00:39:46.138169 kubelet[2924]: I0813 00:39:46.138108 2924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a55c946170ca8b59faf7f06d35966fa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-138\" (UID: \"7a55c946170ca8b59faf7f06d35966fa\") " pod="kube-system/kube-apiserver-ip-172-31-31-138" Aug 13 00:39:46.278774 kubelet[2924]: I0813 00:39:46.278743 2924 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-138" Aug 13 00:39:46.280501 kubelet[2924]: E0813 00:39:46.280370 2924 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.138:6443/api/v1/nodes\": dial tcp 172.31.31.138:6443: connect: connection refused" node="ip-172-31-31-138" Aug 13 00:39:46.321602 containerd[1995]: time="2025-08-13T00:39:46.321536368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-138,Uid:316ab10003df6c3a7916163d3fcf984f,Namespace:kube-system,Attempt:0,}" Aug 13 00:39:46.328309 containerd[1995]: time="2025-08-13T00:39:46.328267626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-138,Uid:74b913fb971333e5a9cd9097b2228d24,Namespace:kube-system,Attempt:0,}" Aug 13 00:39:46.335401 containerd[1995]: time="2025-08-13T00:39:46.335345271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-138,Uid:7a55c946170ca8b59faf7f06d35966fa,Namespace:kube-system,Attempt:0,}" Aug 13 00:39:46.443976 kubelet[2924]: E0813 00:39:46.443889 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-138?timeout=10s\": dial tcp 172.31.31.138:6443: connect: connection refused" interval="800ms" Aug 13 00:39:46.518827 containerd[1995]: time="2025-08-13T00:39:46.518764038Z" level=info msg="connecting to shim 685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0" address="unix:///run/containerd/s/c19dd42daa9de68a5fac359becafeb75350aed64cb41cd8412406747fb2fbea0" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:39:46.521090 containerd[1995]: time="2025-08-13T00:39:46.521046611Z" level=info msg="connecting to shim 0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b" address="unix:///run/containerd/s/2e13b7f34a79709ef33d344725a75712c83e79cd645842e93db19b1039ada265" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:39:46.526917 containerd[1995]: time="2025-08-13T00:39:46.526194136Z" level=info msg="connecting 
to shim 1a3f920e7695c50481edd768149efa87417eb30a68020e2329694069f33e8d7c" address="unix:///run/containerd/s/f40c3b83122d004fbcd80e22cb4c13773425b838fec953522125cd6479715112" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:39:46.656817 systemd[1]: Started cri-containerd-1a3f920e7695c50481edd768149efa87417eb30a68020e2329694069f33e8d7c.scope - libcontainer container 1a3f920e7695c50481edd768149efa87417eb30a68020e2329694069f33e8d7c. Aug 13 00:39:46.662376 systemd[1]: Started cri-containerd-0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b.scope - libcontainer container 0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b. Aug 13 00:39:46.663957 systemd[1]: Started cri-containerd-685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0.scope - libcontainer container 685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0. Aug 13 00:39:46.685612 kubelet[2924]: I0813 00:39:46.685246 2924 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-138" Aug 13 00:39:46.687546 kubelet[2924]: E0813 00:39:46.687493 2924 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.138:6443/api/v1/nodes\": dial tcp 172.31.31.138:6443: connect: connection refused" node="ip-172-31-31-138" Aug 13 00:39:46.788161 containerd[1995]: time="2025-08-13T00:39:46.788053290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-138,Uid:316ab10003df6c3a7916163d3fcf984f,Namespace:kube-system,Attempt:0,} returns sandbox id \"685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0\"" Aug 13 00:39:46.798586 containerd[1995]: time="2025-08-13T00:39:46.798357700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-138,Uid:7a55c946170ca8b59faf7f06d35966fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a3f920e7695c50481edd768149efa87417eb30a68020e2329694069f33e8d7c\"" Aug 13 00:39:46.806975 containerd[1995]: time="2025-08-13T00:39:46.806928846Z" level=info msg="CreateContainer within sandbox \"1a3f920e7695c50481edd768149efa87417eb30a68020e2329694069f33e8d7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:39:46.808795 containerd[1995]: time="2025-08-13T00:39:46.808743443Z" level=info msg="CreateContainer within sandbox \"685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:39:46.827935 containerd[1995]: time="2025-08-13T00:39:46.827832138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-138,Uid:74b913fb971333e5a9cd9097b2228d24,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b\"" Aug 13 00:39:46.828088 containerd[1995]: time="2025-08-13T00:39:46.828048389Z" level=info msg="Container d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:39:46.829957 containerd[1995]: time="2025-08-13T00:39:46.829924518Z" level=info msg="Container cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:39:46.834978 containerd[1995]: time="2025-08-13T00:39:46.834950150Z" level=info msg="CreateContainer within sandbox \"0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:39:46.842506 containerd[1995]: 
time="2025-08-13T00:39:46.842392720Z" level=info msg="CreateContainer within sandbox \"685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\"" Aug 13 00:39:46.846468 containerd[1995]: time="2025-08-13T00:39:46.846420681Z" level=info msg="CreateContainer within sandbox \"1a3f920e7695c50481edd768149efa87417eb30a68020e2329694069f33e8d7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb\"" Aug 13 00:39:46.847326 containerd[1995]: time="2025-08-13T00:39:46.847130678Z" level=info msg="StartContainer for \"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\"" Aug 13 00:39:46.847800 containerd[1995]: time="2025-08-13T00:39:46.847418305Z" level=info msg="StartContainer for \"cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb\"" Aug 13 00:39:46.850218 containerd[1995]: time="2025-08-13T00:39:46.849184527Z" level=info msg="connecting to shim cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb" address="unix:///run/containerd/s/f40c3b83122d004fbcd80e22cb4c13773425b838fec953522125cd6479715112" protocol=ttrpc version=3 Aug 13 00:39:46.850218 containerd[1995]: time="2025-08-13T00:39:46.849705137Z" level=info msg="connecting to shim d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d" address="unix:///run/containerd/s/c19dd42daa9de68a5fac359becafeb75350aed64cb41cd8412406747fb2fbea0" protocol=ttrpc version=3 Aug 13 00:39:46.853828 containerd[1995]: time="2025-08-13T00:39:46.853788169Z" level=info msg="Container 53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:39:46.868984 containerd[1995]: time="2025-08-13T00:39:46.868935401Z" level=info msg="CreateContainer within sandbox \"0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\"" Aug 13 00:39:46.869709 containerd[1995]: time="2025-08-13T00:39:46.869677617Z" level=info msg="StartContainer for \"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\"" Aug 13 00:39:46.870782 containerd[1995]: time="2025-08-13T00:39:46.870624178Z" level=info msg="connecting to shim 53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b" address="unix:///run/containerd/s/2e13b7f34a79709ef33d344725a75712c83e79cd645842e93db19b1039ada265" protocol=ttrpc version=3 Aug 13 00:39:46.886987 systemd[1]: Started cri-containerd-d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d.scope - libcontainer container d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d. Aug 13 00:39:46.902810 systemd[1]: Started cri-containerd-cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb.scope - libcontainer container cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb. Aug 13 00:39:46.937380 systemd[1]: Started cri-containerd-53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b.scope - libcontainer container 53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b. 
Aug 13 00:39:46.997748 kubelet[2924]: W0813 00:39:46.997436 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:46.999408 kubelet[2924]: E0813 00:39:46.997720 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:47.019901 containerd[1995]: time="2025-08-13T00:39:47.019779190Z" level=info msg="StartContainer for \"cb0bc83c608ac71080dc4eac4c0aec86f264ce0e43ae9a700de1a93ade0bf7fb\" returns successfully" Aug 13 00:39:47.032354 containerd[1995]: time="2025-08-13T00:39:47.032259152Z" level=info msg="StartContainer for \"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\" returns successfully" Aug 13 00:39:47.041941 kubelet[2924]: W0813 00:39:47.041751 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:47.041941 kubelet[2924]: E0813 00:39:47.041835 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:47.073229 containerd[1995]: time="2025-08-13T00:39:47.073183628Z" level=info msg="StartContainer for \"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\" returns successfully" Aug 13 00:39:47.245141 kubelet[2924]: E0813 00:39:47.245064 2924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-138?timeout=10s\": dial tcp 172.31.31.138:6443: connect: connection refused" interval="1.6s" Aug 13 00:39:47.275241 kubelet[2924]: W0813 00:39:47.275160 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:47.275391 kubelet[2924]: E0813 00:39:47.275253 2924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:47.285891 kubelet[2924]: W0813 00:39:47.285807 2924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-138&limit=500&resourceVersion=0": dial tcp 172.31.31.138:6443: connect: connection refused Aug 13 00:39:47.286020 kubelet[2924]: E0813 00:39:47.285904 2924 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-138&limit=500&resourceVersion=0\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:47.490387 kubelet[2924]: I0813 00:39:47.490284 2924 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-138" Aug 13 00:39:47.490686 kubelet[2924]: E0813 00:39:47.490656 2924 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.138:6443/api/v1/nodes\": dial tcp 172.31.31.138:6443: connect: connection refused" node="ip-172-31-31-138" Aug 13 00:39:47.871854 kubelet[2924]: E0813 00:39:47.871736 2924 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.138:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:39:49.092719 kubelet[2924]: I0813 00:39:49.092684 2924 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-138" Aug 13 00:39:50.162825 kubelet[2924]: E0813 00:39:50.162784 2924 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-138\" not found" node="ip-172-31-31-138" Aug 13 00:39:50.254885 kubelet[2924]: E0813 00:39:50.254787 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-138.185b2ca961b46a4c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-138,UID:ip-172-31-31-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-138,},FirstTimestamp:2025-08-13 00:39:45.791248972 +0000 UTC m=+0.925205880,LastTimestamp:2025-08-13 00:39:45.791248972 +0000 UTC m=+0.925205880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-138,}" Aug 13 00:39:50.311754 kubelet[2924]: E0813 00:39:50.310838 2924 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-138.185b2ca96681c040 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-138,UID:ip-172-31-31-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-138,},FirstTimestamp:2025-08-13 00:39:45.87181472 +0000 UTC m=+1.005771626,LastTimestamp:2025-08-13 00:39:45.87181472 +0000 UTC m=+1.005771626,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-138,}" Aug 13 00:39:50.341932 kubelet[2924]: I0813 00:39:50.341821 2924 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-138" Aug 13 00:39:50.792292 kubelet[2924]: I0813 00:39:50.792253 2924 apiserver.go:52] "Watching apiserver" Aug 13 00:39:50.835896 kubelet[2924]: I0813 00:39:50.835829 2924 desired_state_of_world_populator.go:155] 
"Finished populating initial desired state of world" Aug 13 00:39:50.911385 kubelet[2924]: E0813 00:39:50.911338 2924 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-138\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:52.343430 systemd[1]: Reload requested from client PID 3196 ('systemctl') (unit session-9.scope)... Aug 13 00:39:52.343452 systemd[1]: Reloading... Aug 13 00:39:52.499628 zram_generator::config[3237]: No configuration found. Aug 13 00:39:52.654340 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:39:52.817578 systemd[1]: Reloading finished in 473 ms. Aug 13 00:39:52.845151 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:39:52.857286 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:39:52.857607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:52.857679 systemd[1]: kubelet.service: Consumed 1.202s CPU time, 128.9M memory peak. Aug 13 00:39:52.861249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:39:53.255050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:39:53.268042 (kubelet)[3300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:39:53.342836 kubelet[3300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:39:53.342836 kubelet[3300]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:39:53.342836 kubelet[3300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:39:53.343288 kubelet[3300]: I0813 00:39:53.342925 3300 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:39:53.351912 kubelet[3300]: I0813 00:39:53.351805 3300 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:39:53.351912 kubelet[3300]: I0813 00:39:53.351830 3300 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:39:53.352064 kubelet[3300]: I0813 00:39:53.352057 3300 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:39:53.353498 kubelet[3300]: I0813 00:39:53.353417 3300 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 00:39:53.361520 kubelet[3300]: I0813 00:39:53.361469 3300 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:39:53.372302 kubelet[3300]: I0813 00:39:53.372268 3300 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:39:53.376591 kubelet[3300]: I0813 00:39:53.376247 3300 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:39:53.377019 kubelet[3300]: I0813 00:39:53.376972 3300 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:39:53.377780 kubelet[3300]: I0813 00:39:53.377097 3300 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:39:53.377780 kubelet[3300]: I0813 00:39:53.377128 3300 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-138","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:39:53.377780 kubelet[3300]: I0813 00:39:53.377411 3300 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:39:53.377780 kubelet[3300]: I0813 00:39:53.377421 3300 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:39:53.377981 kubelet[3300]: I0813 00:39:53.377448 3300 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:39:53.377981 kubelet[3300]: I0813 00:39:53.377545 3300 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:39:53.377981 kubelet[3300]: I0813 00:39:53.377561 3300 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:39:53.377981 kubelet[3300]: I0813 00:39:53.377616 3300 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:39:53.377981 kubelet[3300]: I0813 00:39:53.377626 3300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:39:53.381931 kubelet[3300]: I0813 00:39:53.381478 3300 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:39:53.383166 kubelet[3300]: I0813 00:39:53.383032 3300 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:39:53.385485 kubelet[3300]: I0813 00:39:53.385059 3300 server.go:1274] "Started kubelet" Aug 13 00:39:53.392727 kubelet[3300]: I0813 00:39:53.392207 3300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:39:53.407783 kubelet[3300]: I0813 00:39:53.407658 3300 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:39:53.409654 kubelet[3300]: I0813 00:39:53.408477 3300 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:39:53.410651 sudo[3314]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:39:53.411341 kubelet[3300]: I0813 00:39:53.410969 3300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:39:53.411341 kubelet[3300]: I0813 00:39:53.411123 3300 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:39:53.411341 kubelet[3300]: I0813 00:39:53.411321 3300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:39:53.411276 sudo[3314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:39:53.415848 kubelet[3300]: I0813 00:39:53.415825 3300 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:39:53.421228 kubelet[3300]: I0813 00:39:53.421208 3300 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:39:53.421439 kubelet[3300]: I0813 00:39:53.421424 3300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:39:53.421893 kubelet[3300]: I0813 00:39:53.421870 3300 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:39:53.421985 kubelet[3300]: I0813 00:39:53.421974 3300 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:39:53.426941 kubelet[3300]: E0813 00:39:53.426894 3300 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:39:53.427344 kubelet[3300]: I0813 00:39:53.427295 3300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:39:53.436520 kubelet[3300]: I0813 00:39:53.436297 3300 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:39:53.436520 kubelet[3300]: I0813 00:39:53.436345 3300 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:39:53.436520 kubelet[3300]: I0813 00:39:53.436373 3300 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:39:53.436520 kubelet[3300]: E0813 00:39:53.436433 3300 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:39:53.444246 kubelet[3300]: I0813 00:39:53.444199 3300 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:39:53.524744 kubelet[3300]: I0813 00:39:53.524645 3300 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:39:53.526586 kubelet[3300]: I0813 00:39:53.526313 3300 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:39:53.526586 kubelet[3300]: I0813 00:39:53.526350 3300 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:39:53.526739 kubelet[3300]: I0813 00:39:53.526561 3300 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:39:53.526955 kubelet[3300]: I0813 00:39:53.526817 3300 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:39:53.526955 kubelet[3300]: I0813 00:39:53.526854 3300 policy_none.go:49] "None policy: Start" Aug 13 00:39:53.528310 kubelet[3300]: I0813 00:39:53.528256 3300 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:39:53.528529 kubelet[3300]: I0813 00:39:53.528472 3300 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:39:53.528920 kubelet[3300]: I0813 00:39:53.528829 3300 state_mem.go:75] "Updated machine memory state" Aug 13 00:39:53.537552 kubelet[3300]: I0813 00:39:53.537506 3300 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:39:53.537825 kubelet[3300]: E0813 00:39:53.537808 3300 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:39:53.540067 kubelet[3300]: I0813 00:39:53.539773 3300 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:39:53.541337 kubelet[3300]: I0813 00:39:53.541178 3300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:39:53.544951 kubelet[3300]: I0813 00:39:53.544924 3300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:39:53.664632 kubelet[3300]: I0813 00:39:53.663980 3300 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-138" Aug 13 00:39:53.680960 kubelet[3300]: I0813 00:39:53.680920 3300 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-138" Aug 13 00:39:53.681105 kubelet[3300]: I0813 00:39:53.681046 3300 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-138" Aug 13 00:39:53.922740 kubelet[3300]: I0813 00:39:53.922704 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a55c946170ca8b59faf7f06d35966fa-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-138\" (UID: \"7a55c946170ca8b59faf7f06d35966fa\") " pod="kube-system/kube-apiserver-ip-172-31-31-138" Aug 13 00:39:53.922889 kubelet[3300]: I0813 00:39:53.922751 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:53.922889 kubelet[3300]: I0813 00:39:53.922777 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:53.922889 kubelet[3300]: I0813 00:39:53.922801 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:53.922889 kubelet[3300]: I0813 00:39:53.922822 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:53.922889 kubelet[3300]: I0813 00:39:53.922847 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/316ab10003df6c3a7916163d3fcf984f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-138\" (UID: \"316ab10003df6c3a7916163d3fcf984f\") " pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:53.923101 kubelet[3300]: I0813 00:39:53.922871 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74b913fb971333e5a9cd9097b2228d24-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-138\" (UID: \"74b913fb971333e5a9cd9097b2228d24\") " pod="kube-system/kube-scheduler-ip-172-31-31-138" Aug 13 00:39:53.923101 kubelet[3300]: I0813 00:39:53.922893 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a55c946170ca8b59faf7f06d35966fa-ca-certs\") pod \"kube-apiserver-ip-172-31-31-138\" (UID: \"7a55c946170ca8b59faf7f06d35966fa\") " pod="kube-system/kube-apiserver-ip-172-31-31-138" Aug 13 00:39:53.923101 kubelet[3300]: I0813 00:39:53.922951 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a55c946170ca8b59faf7f06d35966fa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-138\" (UID: \"7a55c946170ca8b59faf7f06d35966fa\") " pod="kube-system/kube-apiserver-ip-172-31-31-138" Aug 13 00:39:54.116525 sudo[3314]: pam_unix(sudo:session): session closed for user root Aug 13 00:39:54.389310 kubelet[3300]: I0813 00:39:54.389182 3300 apiserver.go:52] "Watching apiserver" Aug 13 00:39:54.422996 kubelet[3300]: I0813 00:39:54.422945 3300 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:39:54.494592 kubelet[3300]: E0813 00:39:54.494097 3300 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-138\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-138" Aug 13 00:39:54.536092 kubelet[3300]: I0813 00:39:54.536007 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-138" podStartSLOduration=1.535985989 podStartE2EDuration="1.535985989s" podCreationTimestamp="2025-08-13 00:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:39:54.535671504 +0000 UTC m=+1.254860885" watchObservedRunningTime="2025-08-13 00:39:54.535985989 +0000 UTC m=+1.255175350" Aug 13 00:39:54.553915 kubelet[3300]: I0813 00:39:54.553854 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-138" podStartSLOduration=1.553832082 podStartE2EDuration="1.553832082s" podCreationTimestamp="2025-08-13 00:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:39:54.552698485 +0000 UTC m=+1.271887866" watchObservedRunningTime="2025-08-13 00:39:54.553832082 +0000 UTC m=+1.273021456" Aug 13 00:39:54.586822 kubelet[3300]: I0813 00:39:54.586549 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-138" podStartSLOduration=1.586528175 podStartE2EDuration="1.586528175s" podCreationTimestamp="2025-08-13 00:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:39:54.569899176 +0000 UTC m=+1.289088560" watchObservedRunningTime="2025-08-13 00:39:54.586528175 +0000 UTC m=+1.305717553" Aug 13 00:39:55.590449 update_engine[1976]: I20250813 00:39:55.590095 1976 update_attempter.cc:509] Updating boot flags... Aug 13 00:39:56.509037 sudo[2371]: pam_unix(sudo:session): session closed for user root Aug 13 00:39:56.534379 sshd[2370]: Connection closed by 139.178.68.195 port 36274 Aug 13 00:39:56.536199 sshd-session[2368]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:56.544465 systemd[1]: sshd@8-172.31.31.138:22-139.178.68.195:36274.service: Deactivated successfully. Aug 13 00:39:56.548591 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:39:56.549886 systemd[1]: session-9.scope: Consumed 5.134s CPU time, 207.2M memory peak. Aug 13 00:39:56.553974 systemd-logind[1973]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:39:56.581386 systemd-logind[1973]: Removed session 9. Aug 13 00:39:56.887492 kubelet[3300]: I0813 00:39:56.887258 3300 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:39:56.888144 containerd[1995]: time="2025-08-13T00:39:56.888087904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:39:56.889487 kubelet[3300]: I0813 00:39:56.889441 3300 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:39:57.905878 systemd[1]: Created slice kubepods-besteffort-pod4f804e93_d171_43b5_be95_ae74e6278694.slice - libcontainer container kubepods-besteffort-pod4f804e93_d171_43b5_be95_ae74e6278694.slice. 
Aug 13 00:39:57.922615 systemd[1]: Created slice kubepods-burstable-poda2fa0580_0b53_4bda_9766_22557323fec8.slice - libcontainer container kubepods-burstable-poda2fa0580_0b53_4bda_9766_22557323fec8.slice. Aug 13 00:39:57.982058 systemd[1]: Created slice kubepods-besteffort-pod2b2e4db8_60d6_4b82_931e_91f11fb06629.slice - libcontainer container kubepods-besteffort-pod2b2e4db8_60d6_4b82_931e_91f11fb06629.slice. Aug 13 00:39:58.061470 kubelet[3300]: I0813 00:39:58.060933 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-etc-cni-netd\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061470 kubelet[3300]: I0813 00:39:58.061040 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2fa0580-0b53-4bda-9766-22557323fec8-clustermesh-secrets\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061470 kubelet[3300]: I0813 00:39:58.061062 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f804e93-d171-43b5-be95-ae74e6278694-lib-modules\") pod \"kube-proxy-7d75n\" (UID: \"4f804e93-d171-43b5-be95-ae74e6278694\") " pod="kube-system/kube-proxy-7d75n" Aug 13 00:39:58.061470 kubelet[3300]: I0813 00:39:58.061077 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-run\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061470 kubelet[3300]: I0813 00:39:58.061091 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-lib-modules\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061470 kubelet[3300]: I0813 00:39:58.061140 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-config-path\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061985 kubelet[3300]: I0813 00:39:58.061164 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b2e4db8-60d6-4b82-931e-91f11fb06629-cilium-config-path\") pod \"cilium-operator-5d85765b45-nfpxz\" (UID: \"2b2e4db8-60d6-4b82-931e-91f11fb06629\") " pod="kube-system/cilium-operator-5d85765b45-nfpxz" Aug 13 00:39:58.061985 kubelet[3300]: I0813 00:39:58.061184 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f804e93-d171-43b5-be95-ae74e6278694-xtables-lock\") pod \"kube-proxy-7d75n\" (UID: \"4f804e93-d171-43b5-be95-ae74e6278694\") " pod="kube-system/kube-proxy-7d75n" Aug 13 00:39:58.061985 kubelet[3300]: I0813 00:39:58.061200 3300 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-kernel\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061985 kubelet[3300]: I0813 00:39:58.061214 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-hubble-tls\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.061985 kubelet[3300]: I0813 00:39:58.061228 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f804e93-d171-43b5-be95-ae74e6278694-kube-proxy\") pod \"kube-proxy-7d75n\" (UID: \"4f804e93-d171-43b5-be95-ae74e6278694\") " pod="kube-system/kube-proxy-7d75n" Aug 13 00:39:58.062165 kubelet[3300]: I0813 00:39:58.061243 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-net\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.062165 kubelet[3300]: I0813 00:39:58.061259 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxd5\" (UniqueName: \"kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-kube-api-access-wjxd5\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.062165 kubelet[3300]: I0813 00:39:58.061276 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-hostproc\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.062165 kubelet[3300]: I0813 00:39:58.061306 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-cgroup\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.062165 kubelet[3300]: I0813 00:39:58.061321 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-bpf-maps\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.062294 kubelet[3300]: I0813 00:39:58.061337 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4j2t\" (UniqueName: \"kubernetes.io/projected/4f804e93-d171-43b5-be95-ae74e6278694-kube-api-access-x4j2t\") pod \"kube-proxy-7d75n\" (UID: \"4f804e93-d171-43b5-be95-ae74e6278694\") " pod="kube-system/kube-proxy-7d75n" Aug 13 00:39:58.062294 kubelet[3300]: I0813 00:39:58.061353 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cni-path\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.062294 kubelet[3300]: I0813 00:39:58.061367 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-xtables-lock\") pod \"cilium-6dgvq\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " pod="kube-system/cilium-6dgvq" Aug 13 00:39:58.162368 kubelet[3300]: I0813 00:39:58.162184 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpmhd\" (UniqueName: \"kubernetes.io/projected/2b2e4db8-60d6-4b82-931e-91f11fb06629-kube-api-access-dpmhd\") pod \"cilium-operator-5d85765b45-nfpxz\" (UID: \"2b2e4db8-60d6-4b82-931e-91f11fb06629\") " pod="kube-system/cilium-operator-5d85765b45-nfpxz" Aug 13 00:39:58.216881 containerd[1995]: time="2025-08-13T00:39:58.216846736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7d75n,Uid:4f804e93-d171-43b5-be95-ae74e6278694,Namespace:kube-system,Attempt:0,}" Aug 13 00:39:58.230773 containerd[1995]: time="2025-08-13T00:39:58.230710966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6dgvq,Uid:a2fa0580-0b53-4bda-9766-22557323fec8,Namespace:kube-system,Attempt:0,}" Aug 13 00:39:58.256557 containerd[1995]: time="2025-08-13T00:39:58.255548061Z" level=info msg="connecting to shim 9ffb647b94332a2fb55f757464eb8dfe8850dc94960097816409e726c2cd0e73" address="unix:///run/containerd/s/df84b82f668d588d838131dc0f0900fa848953aadda489b235f9d33622cfa6ee" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:39:58.278679 containerd[1995]: time="2025-08-13T00:39:58.278631052Z" level=info msg="connecting to shim f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515" address="unix:///run/containerd/s/1c5483829eb1a6e028976ef5483134cd4f400cb5ca0d5ff694c6014dfcf2646c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:39:58.286943 containerd[1995]: time="2025-08-13T00:39:58.286647179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nfpxz,Uid:2b2e4db8-60d6-4b82-931e-91f11fb06629,Namespace:kube-system,Attempt:0,}" Aug 13 00:39:58.310084 systemd[1]: Started cri-containerd-9ffb647b94332a2fb55f757464eb8dfe8850dc94960097816409e726c2cd0e73.scope - libcontainer container 9ffb647b94332a2fb55f757464eb8dfe8850dc94960097816409e726c2cd0e73. Aug 13 00:39:58.321831 systemd[1]: Started cri-containerd-f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515.scope - libcontainer container f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515. Aug 13 00:39:58.339657 containerd[1995]: time="2025-08-13T00:39:58.338551569Z" level=info msg="connecting to shim 74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23" address="unix:///run/containerd/s/8d4f21fdaa2a497114a9a6d7b7dc9a3d9f8d86851fb770425fa1199105de1953" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:39:58.385458 systemd[1]: Started cri-containerd-74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23.scope - libcontainer container 74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23. 
Aug 13 00:39:58.415887 containerd[1995]: time="2025-08-13T00:39:58.415403935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6dgvq,Uid:a2fa0580-0b53-4bda-9766-22557323fec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\"" Aug 13 00:39:58.419553 containerd[1995]: time="2025-08-13T00:39:58.418854920Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:39:58.429190 containerd[1995]: time="2025-08-13T00:39:58.429144340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7d75n,Uid:4f804e93-d171-43b5-be95-ae74e6278694,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ffb647b94332a2fb55f757464eb8dfe8850dc94960097816409e726c2cd0e73\"" Aug 13 00:39:58.435335 containerd[1995]: time="2025-08-13T00:39:58.435164308Z" level=info msg="CreateContainer within sandbox \"9ffb647b94332a2fb55f757464eb8dfe8850dc94960097816409e726c2cd0e73\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:39:58.450589 containerd[1995]: time="2025-08-13T00:39:58.449734254Z" level=info msg="Container 2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:39:58.463169 containerd[1995]: time="2025-08-13T00:39:58.463126579Z" level=info msg="CreateContainer within sandbox \"9ffb647b94332a2fb55f757464eb8dfe8850dc94960097816409e726c2cd0e73\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784\"" Aug 13 00:39:58.465250 containerd[1995]: time="2025-08-13T00:39:58.464965668Z" level=info msg="StartContainer for \"2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784\"" Aug 13 00:39:58.469466 containerd[1995]: time="2025-08-13T00:39:58.469260296Z" level=info msg="connecting to shim 2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784" address="unix:///run/containerd/s/df84b82f668d588d838131dc0f0900fa848953aadda489b235f9d33622cfa6ee" protocol=ttrpc version=3 Aug 13 00:39:58.479813 containerd[1995]: time="2025-08-13T00:39:58.479723554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nfpxz,Uid:2b2e4db8-60d6-4b82-931e-91f11fb06629,Namespace:kube-system,Attempt:0,} returns sandbox id \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\"" Aug 13 00:39:58.499886 systemd[1]: Started cri-containerd-2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784.scope - libcontainer container 2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784. 
Aug 13 00:39:58.542787 containerd[1995]: time="2025-08-13T00:39:58.542747902Z" level=info msg="StartContainer for \"2ac989f36c3614f29146d33fc17cc47fb3bd2b68527891686d8b2bcbc771a784\" returns successfully" Aug 13 00:39:59.564750 kubelet[3300]: I0813 00:39:59.563635 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7d75n" podStartSLOduration=2.563613619 podStartE2EDuration="2.563613619s" podCreationTimestamp="2025-08-13 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:39:59.563417106 +0000 UTC m=+6.282606508" watchObservedRunningTime="2025-08-13 00:39:59.563613619 +0000 UTC m=+6.282803000" Aug 13 00:40:06.421484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893844855.mount: Deactivated successfully. Aug 13 00:40:09.090039 containerd[1995]: time="2025-08-13T00:40:09.089977992Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:40:09.092001 containerd[1995]: time="2025-08-13T00:40:09.091848335Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:40:09.092695 containerd[1995]: time="2025-08-13T00:40:09.092645156Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:40:09.094232 containerd[1995]: time="2025-08-13T00:40:09.093787067Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.674611209s" Aug 13 00:40:09.094232 containerd[1995]: time="2025-08-13T00:40:09.093826638Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:40:09.096769 containerd[1995]: time="2025-08-13T00:40:09.096742080Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:40:09.097757 containerd[1995]: time="2025-08-13T00:40:09.097720995Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:40:09.138325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747337995.mount: Deactivated successfully. 
Aug 13 00:40:09.141998 containerd[1995]: time="2025-08-13T00:40:09.141699639Z" level=info msg="Container f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:09.159206 containerd[1995]: time="2025-08-13T00:40:09.159108183Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\"" Aug 13 00:40:09.163359 containerd[1995]: time="2025-08-13T00:40:09.163247277Z" level=info msg="StartContainer for \"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\"" Aug 13 00:40:09.174395 containerd[1995]: time="2025-08-13T00:40:09.174302343Z" level=info msg="connecting to shim f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9" address="unix:///run/containerd/s/1c5483829eb1a6e028976ef5483134cd4f400cb5ca0d5ff694c6014dfcf2646c" protocol=ttrpc version=3 Aug 13 00:40:09.213823 systemd[1]: Started cri-containerd-f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9.scope - libcontainer container f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9. Aug 13 00:40:09.255661 containerd[1995]: time="2025-08-13T00:40:09.255138891Z" level=info msg="StartContainer for \"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" returns successfully" Aug 13 00:40:09.270282 systemd[1]: cri-containerd-f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9.scope: Deactivated successfully. Aug 13 00:40:09.320218 containerd[1995]: time="2025-08-13T00:40:09.318172137Z" level=info msg="received exit event container_id:\"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" id:\"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" pid:3980 exited_at:{seconds:1755045609 nanos:275167872}" Aug 13 00:40:09.323458 containerd[1995]: time="2025-08-13T00:40:09.323422138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" id:\"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" pid:3980 exited_at:{seconds:1755045609 nanos:275167872}" Aug 13 00:40:09.339129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9-rootfs.mount: Deactivated successfully. 
Aug 13 00:40:10.653401 containerd[1995]: time="2025-08-13T00:40:10.652788973Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:40:10.670132 containerd[1995]: time="2025-08-13T00:40:10.670091194Z" level=info msg="Container 0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:10.702273 containerd[1995]: time="2025-08-13T00:40:10.702223436Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\"" Aug 13 00:40:10.702831 containerd[1995]: time="2025-08-13T00:40:10.702802097Z" level=info msg="StartContainer for \"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\"" Aug 13 00:40:10.704414 containerd[1995]: time="2025-08-13T00:40:10.704368193Z" level=info msg="connecting to shim 0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73" address="unix:///run/containerd/s/1c5483829eb1a6e028976ef5483134cd4f400cb5ca0d5ff694c6014dfcf2646c" protocol=ttrpc version=3 Aug 13 00:40:10.745046 systemd[1]: Started cri-containerd-0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73.scope - libcontainer container 0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73. Aug 13 00:40:10.784839 containerd[1995]: time="2025-08-13T00:40:10.784787664Z" level=info msg="StartContainer for \"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" returns successfully" Aug 13 00:40:10.812254 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:40:10.812619 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:40:10.813697 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:40:10.817198 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:40:10.822394 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:40:10.825051 systemd[1]: cri-containerd-0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73.scope: Deactivated successfully. Aug 13 00:40:10.828032 containerd[1995]: time="2025-08-13T00:40:10.827726885Z" level=info msg="received exit event container_id:\"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" id:\"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" pid:4025 exited_at:{seconds:1755045610 nanos:826064121}" Aug 13 00:40:10.829027 containerd[1995]: time="2025-08-13T00:40:10.828957648Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" id:\"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" pid:4025 exited_at:{seconds:1755045610 nanos:826064121}" Aug 13 00:40:10.881097 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 00:40:11.629282 containerd[1995]: time="2025-08-13T00:40:11.629212691Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:40:11.630408 containerd[1995]: time="2025-08-13T00:40:11.630359167Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:40:11.631512 containerd[1995]: time="2025-08-13T00:40:11.631482464Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:40:11.632831 containerd[1995]: time="2025-08-13T00:40:11.632797667Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.535919516s" Aug 13 00:40:11.632831 containerd[1995]: time="2025-08-13T00:40:11.632833454Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:40:11.636456 containerd[1995]: time="2025-08-13T00:40:11.635246304Z" level=info msg="CreateContainer within sandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:40:11.646454 containerd[1995]: time="2025-08-13T00:40:11.646413955Z" level=info msg="Container 59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:11.656333 containerd[1995]: time="2025-08-13T00:40:11.656048751Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:40:11.657557 containerd[1995]: time="2025-08-13T00:40:11.657516335Z" level=info msg="CreateContainer within sandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\"" Aug 13 00:40:11.659355 containerd[1995]: time="2025-08-13T00:40:11.658694033Z" level=info msg="StartContainer for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\"" Aug 13 00:40:11.661776 containerd[1995]: time="2025-08-13T00:40:11.661751341Z" level=info msg="connecting to shim 59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350" address="unix:///run/containerd/s/8d4f21fdaa2a497114a9a6d7b7dc9a3d9f8d86851fb770425fa1199105de1953" protocol=ttrpc version=3 Aug 13 00:40:11.671739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91988635.mount: Deactivated successfully. Aug 13 00:40:11.672278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73-rootfs.mount: Deactivated successfully. 
Aug 13 00:40:11.681173 containerd[1995]: time="2025-08-13T00:40:11.681050003Z" level=info msg="Container 0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:11.701000 systemd[1]: Started cri-containerd-59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350.scope - libcontainer container 59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350. Aug 13 00:40:11.705341 containerd[1995]: time="2025-08-13T00:40:11.705304137Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\"" Aug 13 00:40:11.706889 containerd[1995]: time="2025-08-13T00:40:11.706847954Z" level=info msg="StartContainer for \"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\"" Aug 13 00:40:11.708257 containerd[1995]: time="2025-08-13T00:40:11.708162766Z" level=info msg="connecting to shim 0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca" address="unix:///run/containerd/s/1c5483829eb1a6e028976ef5483134cd4f400cb5ca0d5ff694c6014dfcf2646c" protocol=ttrpc version=3 Aug 13 00:40:11.734806 systemd[1]: Started cri-containerd-0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca.scope - libcontainer container 0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca. Aug 13 00:40:11.763347 containerd[1995]: time="2025-08-13T00:40:11.763294834Z" level=info msg="StartContainer for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" returns successfully" Aug 13 00:40:11.810455 containerd[1995]: time="2025-08-13T00:40:11.810400485Z" level=info msg="StartContainer for \"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" returns successfully" Aug 13 00:40:11.820066 systemd[1]: cri-containerd-0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca.scope: Deactivated successfully. Aug 13 00:40:11.820578 systemd[1]: cri-containerd-0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca.scope: Consumed 32ms CPU time, 4.7M memory peak, 1M read from disk. Aug 13 00:40:11.823479 containerd[1995]: time="2025-08-13T00:40:11.823430090Z" level=info msg="received exit event container_id:\"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" id:\"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" pid:4110 exited_at:{seconds:1755045611 nanos:822390664}" Aug 13 00:40:11.825272 containerd[1995]: time="2025-08-13T00:40:11.825130677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" id:\"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" pid:4110 exited_at:{seconds:1755045611 nanos:822390664}" Aug 13 00:40:12.672698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca-rootfs.mount: Deactivated successfully. 
Aug 13 00:40:12.677123 containerd[1995]: time="2025-08-13T00:40:12.677080500Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:40:12.698601 containerd[1995]: time="2025-08-13T00:40:12.695821466Z" level=info msg="Container 7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:12.698640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932540097.mount: Deactivated successfully. Aug 13 00:40:12.705704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393156730.mount: Deactivated successfully. Aug 13 00:40:12.719463 containerd[1995]: time="2025-08-13T00:40:12.719392243Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\"" Aug 13 00:40:12.720421 containerd[1995]: time="2025-08-13T00:40:12.720387797Z" level=info msg="StartContainer for \"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\"" Aug 13 00:40:12.723609 containerd[1995]: time="2025-08-13T00:40:12.723480991Z" level=info msg="connecting to shim 7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276" address="unix:///run/containerd/s/1c5483829eb1a6e028976ef5483134cd4f400cb5ca0d5ff694c6014dfcf2646c" protocol=ttrpc version=3 Aug 13 00:40:12.784798 systemd[1]: Started cri-containerd-7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276.scope - libcontainer container 7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276. Aug 13 00:40:12.867816 kubelet[3300]: I0813 00:40:12.866552 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-nfpxz" podStartSLOduration=2.714571126 podStartE2EDuration="15.866530413s" podCreationTimestamp="2025-08-13 00:39:57 +0000 UTC" firstStartedPulling="2025-08-13 00:39:58.482083426 +0000 UTC m=+5.201272799" lastFinishedPulling="2025-08-13 00:40:11.634042726 +0000 UTC m=+18.353232086" observedRunningTime="2025-08-13 00:40:12.7639562 +0000 UTC m=+19.483145581" watchObservedRunningTime="2025-08-13 00:40:12.866530413 +0000 UTC m=+19.585719794" Aug 13 00:40:12.908925 containerd[1995]: time="2025-08-13T00:40:12.907662645Z" level=info msg="StartContainer for \"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" returns successfully" Aug 13 00:40:12.911031 systemd[1]: cri-containerd-7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276.scope: Deactivated successfully. 
Aug 13 00:40:12.912324 containerd[1995]: time="2025-08-13T00:40:12.912255469Z" level=info msg="received exit event container_id:\"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" id:\"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" pid:4164 exited_at:{seconds:1755045612 nanos:911849627}" Aug 13 00:40:12.913113 containerd[1995]: time="2025-08-13T00:40:12.912951845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" id:\"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" pid:4164 exited_at:{seconds:1755045612 nanos:911849627}" Aug 13 00:40:13.671002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276-rootfs.mount: Deactivated successfully. Aug 13 00:40:13.681938 containerd[1995]: time="2025-08-13T00:40:13.681366928Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:40:13.734356 containerd[1995]: time="2025-08-13T00:40:13.732833171Z" level=info msg="Container bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:13.768394 containerd[1995]: time="2025-08-13T00:40:13.768339505Z" level=info msg="CreateContainer within sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\"" Aug 13 00:40:13.771193 containerd[1995]: time="2025-08-13T00:40:13.769246750Z" level=info msg="StartContainer for \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\"" Aug 13 00:40:13.771193 containerd[1995]: time="2025-08-13T00:40:13.771019681Z" level=info msg="connecting to shim bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96" address="unix:///run/containerd/s/1c5483829eb1a6e028976ef5483134cd4f400cb5ca0d5ff694c6014dfcf2646c" protocol=ttrpc version=3 Aug 13 00:40:13.812854 systemd[1]: Started cri-containerd-bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96.scope - libcontainer container bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96. Aug 13 00:40:13.861523 containerd[1995]: time="2025-08-13T00:40:13.861449987Z" level=info msg="StartContainer for \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" returns successfully" Aug 13 00:40:14.153436 containerd[1995]: time="2025-08-13T00:40:14.153187967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" id:\"e6c04122fe677b9eab6d3fae598bc24b6bfcd177ef029da7b4b2ebc10990582b\" pid:4233 exited_at:{seconds:1755045614 nanos:151639125}" Aug 13 00:40:14.205084 kubelet[3300]: I0813 00:40:14.204876 3300 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:40:14.284322 systemd[1]: Created slice kubepods-burstable-pod8f9fa263_73fc_464f_8eeb_2c9975b5273e.slice - libcontainer container kubepods-burstable-pod8f9fa263_73fc_464f_8eeb_2c9975b5273e.slice. Aug 13 00:40:14.297131 systemd[1]: Created slice kubepods-burstable-poded5538c3_1491_4167_b42f_098390e874ff.slice - libcontainer container kubepods-burstable-poded5538c3_1491_4167_b42f_098390e874ff.slice. 
Aug 13 00:40:14.348485 kubelet[3300]: I0813 00:40:14.348432 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqcc\" (UniqueName: \"kubernetes.io/projected/ed5538c3-1491-4167-b42f-098390e874ff-kube-api-access-fdqcc\") pod \"coredns-7c65d6cfc9-zv8cf\" (UID: \"ed5538c3-1491-4167-b42f-098390e874ff\") " pod="kube-system/coredns-7c65d6cfc9-zv8cf" Aug 13 00:40:14.348921 kubelet[3300]: I0813 00:40:14.348743 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f9fa263-73fc-464f-8eeb-2c9975b5273e-config-volume\") pod \"coredns-7c65d6cfc9-qv4zl\" (UID: \"8f9fa263-73fc-464f-8eeb-2c9975b5273e\") " pod="kube-system/coredns-7c65d6cfc9-qv4zl" Aug 13 00:40:14.349057 kubelet[3300]: I0813 00:40:14.348951 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82hc\" (UniqueName: \"kubernetes.io/projected/8f9fa263-73fc-464f-8eeb-2c9975b5273e-kube-api-access-r82hc\") pod \"coredns-7c65d6cfc9-qv4zl\" (UID: \"8f9fa263-73fc-464f-8eeb-2c9975b5273e\") " pod="kube-system/coredns-7c65d6cfc9-qv4zl" Aug 13 00:40:14.349057 kubelet[3300]: I0813 00:40:14.348988 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed5538c3-1491-4167-b42f-098390e874ff-config-volume\") pod \"coredns-7c65d6cfc9-zv8cf\" (UID: \"ed5538c3-1491-4167-b42f-098390e874ff\") " pod="kube-system/coredns-7c65d6cfc9-zv8cf" Aug 13 00:40:14.609069 containerd[1995]: time="2025-08-13T00:40:14.608954031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zv8cf,Uid:ed5538c3-1491-4167-b42f-098390e874ff,Namespace:kube-system,Attempt:0,}" Aug 13 00:40:14.609443 containerd[1995]: time="2025-08-13T00:40:14.609227406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qv4zl,Uid:8f9fa263-73fc-464f-8eeb-2c9975b5273e,Namespace:kube-system,Attempt:0,}" Aug 13 00:40:16.944700 systemd-networkd[1839]: cilium_host: Link UP Aug 13 00:40:16.945373 systemd-networkd[1839]: cilium_net: Link UP Aug 13 00:40:16.946367 systemd-networkd[1839]: cilium_net: Gained carrier Aug 13 00:40:16.946781 (udev-worker)[4298]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:40:16.947269 systemd-networkd[1839]: cilium_host: Gained carrier Aug 13 00:40:16.950943 (udev-worker)[4337]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:40:17.146683 (udev-worker)[4346]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:40:17.156200 systemd-networkd[1839]: cilium_vxlan: Link UP Aug 13 00:40:17.156209 systemd-networkd[1839]: cilium_vxlan: Gained carrier Aug 13 00:40:17.404245 systemd-networkd[1839]: cilium_host: Gained IPv6LL Aug 13 00:40:17.812234 systemd-networkd[1839]: cilium_net: Gained IPv6LL Aug 13 00:40:18.084614 kernel: NET: Registered PF_ALG protocol family Aug 13 00:40:18.260685 systemd-networkd[1839]: cilium_vxlan: Gained IPv6LL Aug 13 00:40:18.988166 (udev-worker)[4347]: Network interface NamePolicy= disabled on kernel command line. 
Aug 13 00:40:18.990700 systemd-networkd[1839]: lxc_health: Link UP Aug 13 00:40:19.013671 systemd-networkd[1839]: lxc_health: Gained carrier Aug 13 00:40:19.218858 systemd-networkd[1839]: lxc3d050e81e413: Link UP Aug 13 00:40:19.219593 kernel: eth0: renamed from tmpe32f6 Aug 13 00:40:19.223334 systemd-networkd[1839]: lxc3d050e81e413: Gained carrier Aug 13 00:40:19.685477 systemd-networkd[1839]: lxca5c756f3a07d: Link UP Aug 13 00:40:19.690609 kernel: eth0: renamed from tmp3e7ff Aug 13 00:40:19.694912 systemd-networkd[1839]: lxca5c756f3a07d: Gained carrier Aug 13 00:40:20.329150 kubelet[3300]: I0813 00:40:20.326473 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6dgvq" podStartSLOduration=12.649780114 podStartE2EDuration="23.326450901s" podCreationTimestamp="2025-08-13 00:39:57 +0000 UTC" firstStartedPulling="2025-08-13 00:39:58.418072542 +0000 UTC m=+5.137261911" lastFinishedPulling="2025-08-13 00:40:09.094743323 +0000 UTC m=+15.813932698" observedRunningTime="2025-08-13 00:40:14.778184622 +0000 UTC m=+21.497374004" watchObservedRunningTime="2025-08-13 00:40:20.326450901 +0000 UTC m=+27.045640283" Aug 13 00:40:20.883879 systemd-networkd[1839]: lxc3d050e81e413: Gained IPv6LL Aug 13 00:40:21.011805 systemd-networkd[1839]: lxc_health: Gained IPv6LL Aug 13 00:40:21.652207 systemd-networkd[1839]: lxca5c756f3a07d: Gained IPv6LL Aug 13 00:40:23.986897 containerd[1995]: time="2025-08-13T00:40:23.986848759Z" level=info msg="connecting to shim 3e7ff74cf75115d03e4b5a4ec57fc0fa0b51ef9d3cc5b279726fc6be51b7709a" address="unix:///run/containerd/s/5f5e1a0d7d4a6ed2fda9c2edcd83180380fd837de4056e508deb292bb281c0bd" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:40:23.987605 containerd[1995]: time="2025-08-13T00:40:23.987072406Z" level=info msg="connecting to shim e32f6537e1f2ec8034921af538d6c3813caf6133547f1997eba6d1790ff6a5df" address="unix:///run/containerd/s/19849a1fc5499156d090f36022db2f858cfd802928a42da24cd8ae301a821d40" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:40:24.078079 systemd[1]: Started cri-containerd-e32f6537e1f2ec8034921af538d6c3813caf6133547f1997eba6d1790ff6a5df.scope - libcontainer container e32f6537e1f2ec8034921af538d6c3813caf6133547f1997eba6d1790ff6a5df. Aug 13 00:40:24.091228 systemd[1]: Started cri-containerd-3e7ff74cf75115d03e4b5a4ec57fc0fa0b51ef9d3cc5b279726fc6be51b7709a.scope - libcontainer container 3e7ff74cf75115d03e4b5a4ec57fc0fa0b51ef9d3cc5b279726fc6be51b7709a. 
Aug 13 00:40:24.199774 containerd[1995]: time="2025-08-13T00:40:24.199731247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qv4zl,Uid:8f9fa263-73fc-464f-8eeb-2c9975b5273e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e7ff74cf75115d03e4b5a4ec57fc0fa0b51ef9d3cc5b279726fc6be51b7709a\"" Aug 13 00:40:24.206032 containerd[1995]: time="2025-08-13T00:40:24.205988408Z" level=info msg="CreateContainer within sandbox \"3e7ff74cf75115d03e4b5a4ec57fc0fa0b51ef9d3cc5b279726fc6be51b7709a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:40:24.216419 containerd[1995]: time="2025-08-13T00:40:24.216298537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zv8cf,Uid:ed5538c3-1491-4167-b42f-098390e874ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e32f6537e1f2ec8034921af538d6c3813caf6133547f1997eba6d1790ff6a5df\"" Aug 13 00:40:24.219901 containerd[1995]: time="2025-08-13T00:40:24.219742454Z" level=info msg="CreateContainer within sandbox \"e32f6537e1f2ec8034921af538d6c3813caf6133547f1997eba6d1790ff6a5df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:40:24.238309 containerd[1995]: time="2025-08-13T00:40:24.237036164Z" level=info msg="Container edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:24.241223 containerd[1995]: time="2025-08-13T00:40:24.241182636Z" level=info msg="Container 14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:40:24.251029 containerd[1995]: time="2025-08-13T00:40:24.250928894Z" level=info msg="CreateContainer within sandbox \"3e7ff74cf75115d03e4b5a4ec57fc0fa0b51ef9d3cc5b279726fc6be51b7709a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553\"" Aug 13 00:40:24.252640 containerd[1995]: time="2025-08-13T00:40:24.252345901Z" level=info msg="StartContainer for \"edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553\"" Aug 13 00:40:24.253726 containerd[1995]: time="2025-08-13T00:40:24.253693284Z" level=info msg="connecting to shim edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553" address="unix:///run/containerd/s/5f5e1a0d7d4a6ed2fda9c2edcd83180380fd837de4056e508deb292bb281c0bd" protocol=ttrpc version=3 Aug 13 00:40:24.272249 containerd[1995]: time="2025-08-13T00:40:24.272195638Z" level=info msg="CreateContainer within sandbox \"e32f6537e1f2ec8034921af538d6c3813caf6133547f1997eba6d1790ff6a5df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43\"" Aug 13 00:40:24.274924 containerd[1995]: time="2025-08-13T00:40:24.274886370Z" level=info msg="StartContainer for \"14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43\"" Aug 13 00:40:24.282482 containerd[1995]: time="2025-08-13T00:40:24.282446761Z" level=info msg="connecting to shim 14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43" address="unix:///run/containerd/s/19849a1fc5499156d090f36022db2f858cfd802928a42da24cd8ae301a821d40" protocol=ttrpc version=3 Aug 13 00:40:24.283780 systemd[1]: Started cri-containerd-edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553.scope - libcontainer container edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553. 
Aug 13 00:40:24.314777 systemd[1]: Started cri-containerd-14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43.scope - libcontainer container 14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43. Aug 13 00:40:24.370499 containerd[1995]: time="2025-08-13T00:40:24.370469696Z" level=info msg="StartContainer for \"edd38b5b111753cfd3b337a2e31d40edc85111ff4b170c62771dbe90f37cf553\" returns successfully" Aug 13 00:40:24.376587 containerd[1995]: time="2025-08-13T00:40:24.376520881Z" level=info msg="StartContainer for \"14d6f1fde33817e0697bfc6d6ebe25c6e07b10fbae11aa2b9fb3486ce7ac2d43\" returns successfully" Aug 13 00:40:24.832686 kubelet[3300]: I0813 00:40:24.832601 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zv8cf" podStartSLOduration=27.832585937 podStartE2EDuration="27.832585937s" podCreationTimestamp="2025-08-13 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:40:24.832031825 +0000 UTC m=+31.551221222" watchObservedRunningTime="2025-08-13 00:40:24.832585937 +0000 UTC m=+31.551775311" Aug 13 00:40:24.850460 kubelet[3300]: I0813 00:40:24.850168 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qv4zl" podStartSLOduration=27.850149833 podStartE2EDuration="27.850149833s" podCreationTimestamp="2025-08-13 00:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:40:24.848370412 +0000 UTC m=+31.567559792" watchObservedRunningTime="2025-08-13 00:40:24.850149833 +0000 UTC m=+31.569339214" Aug 13 00:40:24.951425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739312875.mount: Deactivated successfully. 
Aug 13 00:40:26.262845 ntpd[1965]: Listen normally on 8 cilium_host 192.168.0.139:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 8 cilium_host 192.168.0.139:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 9 cilium_net [fe80::4b5:f1ff:fe77:cacf%4]:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 10 cilium_host [fe80::4e:aaff:fe7c:cb86%5]:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 11 cilium_vxlan [fe80::986b:b9ff:fe94:53d0%6]:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 12 lxc_health [fe80::a4be:20ff:fe21:5d41%8]:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 13 lxc3d050e81e413 [fe80::4c9c:ccff:feb0:3b3b%10]:123 Aug 13 00:40:26.263228 ntpd[1965]: 13 Aug 00:40:26 ntpd[1965]: Listen normally on 14 lxca5c756f3a07d [fe80::4493:dff:fe77:9faf%12]:123 Aug 13 00:40:26.262924 ntpd[1965]: Listen normally on 9 cilium_net [fe80::4b5:f1ff:fe77:cacf%4]:123 Aug 13 00:40:26.262979 ntpd[1965]: Listen normally on 10 cilium_host [fe80::4e:aaff:fe7c:cb86%5]:123 Aug 13 00:40:26.263012 ntpd[1965]: Listen normally on 11 cilium_vxlan [fe80::986b:b9ff:fe94:53d0%6]:123 Aug 13 00:40:26.263043 ntpd[1965]: Listen normally on 12 lxc_health [fe80::a4be:20ff:fe21:5d41%8]:123 Aug 13 00:40:26.263074 ntpd[1965]: Listen normally on 13 lxc3d050e81e413 [fe80::4c9c:ccff:feb0:3b3b%10]:123 Aug 13 00:40:26.263100 ntpd[1965]: Listen normally on 14 lxca5c756f3a07d [fe80::4493:dff:fe77:9faf%12]:123 Aug 13 00:40:43.149854 systemd[1]: Started sshd@9-172.31.31.138:22-139.178.68.195:50090.service - OpenSSH per-connection server daemon (139.178.68.195:50090). Aug 13 00:40:43.376459 sshd[4883]: Accepted publickey for core from 139.178.68.195 port 50090 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:40:43.378085 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:43.392188 systemd-logind[1973]: New session 10 of user core. Aug 13 00:40:43.396942 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:40:44.286958 sshd[4885]: Connection closed by 139.178.68.195 port 50090 Aug 13 00:40:44.288164 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:44.295007 systemd[1]: sshd@9-172.31.31.138:22-139.178.68.195:50090.service: Deactivated successfully. Aug 13 00:40:44.299374 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:40:44.300516 systemd-logind[1973]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:40:44.303886 systemd-logind[1973]: Removed session 10. Aug 13 00:40:49.327879 systemd[1]: Started sshd@10-172.31.31.138:22-139.178.68.195:50102.service - OpenSSH per-connection server daemon (139.178.68.195:50102). Aug 13 00:40:49.513983 sshd[4899]: Accepted publickey for core from 139.178.68.195 port 50102 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:40:49.515594 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:49.522327 systemd-logind[1973]: New session 11 of user core. Aug 13 00:40:49.528837 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 00:40:49.736339 sshd[4901]: Connection closed by 139.178.68.195 port 50102 Aug 13 00:40:49.737825 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:49.741327 systemd[1]: sshd@10-172.31.31.138:22-139.178.68.195:50102.service: Deactivated successfully. Aug 13 00:40:49.744367 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:40:49.747581 systemd-logind[1973]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:40:49.749016 systemd-logind[1973]: Removed session 11. Aug 13 00:40:54.774883 systemd[1]: Started sshd@11-172.31.31.138:22-139.178.68.195:60158.service - OpenSSH per-connection server daemon (139.178.68.195:60158). Aug 13 00:40:54.961505 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 60158 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:40:54.962658 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:54.968643 systemd-logind[1973]: New session 12 of user core. Aug 13 00:40:54.981030 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:40:55.183763 sshd[4917]: Connection closed by 139.178.68.195 port 60158 Aug 13 00:40:55.184932 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:55.189456 systemd[1]: sshd@11-172.31.31.138:22-139.178.68.195:60158.service: Deactivated successfully. Aug 13 00:40:55.192482 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:40:55.194428 systemd-logind[1973]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:40:55.196174 systemd-logind[1973]: Removed session 12. Aug 13 00:40:55.219521 systemd[1]: Started sshd@12-172.31.31.138:22-139.178.68.195:60160.service - OpenSSH per-connection server daemon (139.178.68.195:60160). Aug 13 00:40:55.396411 sshd[4930]: Accepted publickey for core from 139.178.68.195 port 60160 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:40:55.397962 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:55.404277 systemd-logind[1973]: New session 13 of user core. Aug 13 00:40:55.413970 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:40:55.650709 sshd[4932]: Connection closed by 139.178.68.195 port 60160 Aug 13 00:40:55.651492 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:55.661003 systemd[1]: sshd@12-172.31.31.138:22-139.178.68.195:60160.service: Deactivated successfully. Aug 13 00:40:55.665972 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:40:55.669772 systemd-logind[1973]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:40:55.688688 systemd[1]: Started sshd@13-172.31.31.138:22-139.178.68.195:60174.service - OpenSSH per-connection server daemon (139.178.68.195:60174). Aug 13 00:40:55.690241 systemd-logind[1973]: Removed session 13. Aug 13 00:40:55.870593 sshd[4942]: Accepted publickey for core from 139.178.68.195 port 60174 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:40:55.871091 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:55.893804 systemd-logind[1973]: New session 14 of user core. Aug 13 00:40:55.900548 systemd[1]: Started session-14.scope - Session 14 of User core. 
Aug 13 00:40:56.098422 sshd[4944]: Connection closed by 139.178.68.195 port 60174 Aug 13 00:40:56.097442 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:56.104255 systemd-logind[1973]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:40:56.104400 systemd[1]: sshd@13-172.31.31.138:22-139.178.68.195:60174.service: Deactivated successfully. Aug 13 00:40:56.108173 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:40:56.112234 systemd-logind[1973]: Removed session 14. Aug 13 00:41:01.135530 systemd[1]: Started sshd@14-172.31.31.138:22-139.178.68.195:43006.service - OpenSSH per-connection server daemon (139.178.68.195:43006). Aug 13 00:41:01.391347 sshd[4958]: Accepted publickey for core from 139.178.68.195 port 43006 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:01.405242 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:01.430904 systemd-logind[1973]: New session 15 of user core. Aug 13 00:41:01.444918 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:41:01.790792 sshd[4960]: Connection closed by 139.178.68.195 port 43006 Aug 13 00:41:01.791867 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:01.809501 systemd-logind[1973]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:41:01.813902 systemd[1]: sshd@14-172.31.31.138:22-139.178.68.195:43006.service: Deactivated successfully. Aug 13 00:41:01.819336 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:41:01.824734 systemd-logind[1973]: Removed session 15. Aug 13 00:41:06.825906 systemd[1]: Started sshd@15-172.31.31.138:22-139.178.68.195:43010.service - OpenSSH per-connection server daemon (139.178.68.195:43010). Aug 13 00:41:07.010627 sshd[4972]: Accepted publickey for core from 139.178.68.195 port 43010 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:07.012060 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:07.019306 systemd-logind[1973]: New session 16 of user core. Aug 13 00:41:07.024835 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:41:07.217070 sshd[4974]: Connection closed by 139.178.68.195 port 43010 Aug 13 00:41:07.217814 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:07.221225 systemd[1]: sshd@15-172.31.31.138:22-139.178.68.195:43010.service: Deactivated successfully. Aug 13 00:41:07.223758 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:41:07.226169 systemd-logind[1973]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:41:07.227733 systemd-logind[1973]: Removed session 16. Aug 13 00:41:12.252036 systemd[1]: Started sshd@16-172.31.31.138:22-139.178.68.195:41880.service - OpenSSH per-connection server daemon (139.178.68.195:41880). Aug 13 00:41:12.422621 sshd[4986]: Accepted publickey for core from 139.178.68.195 port 41880 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:12.423815 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:12.429925 systemd-logind[1973]: New session 17 of user core. Aug 13 00:41:12.433807 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 13 00:41:12.631769 sshd[4988]: Connection closed by 139.178.68.195 port 41880 Aug 13 00:41:12.633309 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:12.637892 systemd[1]: sshd@16-172.31.31.138:22-139.178.68.195:41880.service: Deactivated successfully. Aug 13 00:41:12.641671 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:41:12.642822 systemd-logind[1973]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:41:12.644881 systemd-logind[1973]: Removed session 17. Aug 13 00:41:12.666885 systemd[1]: Started sshd@17-172.31.31.138:22-139.178.68.195:41896.service - OpenSSH per-connection server daemon (139.178.68.195:41896). Aug 13 00:41:12.865674 sshd[4999]: Accepted publickey for core from 139.178.68.195 port 41896 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:12.867094 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:12.873701 systemd-logind[1973]: New session 18 of user core. Aug 13 00:41:12.880804 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:41:13.623680 sshd[5001]: Connection closed by 139.178.68.195 port 41896 Aug 13 00:41:13.626086 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:13.654725 systemd[1]: sshd@17-172.31.31.138:22-139.178.68.195:41896.service: Deactivated successfully. Aug 13 00:41:13.656964 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:41:13.660072 systemd-logind[1973]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:41:13.667461 systemd[1]: Started sshd@18-172.31.31.138:22-139.178.68.195:41904.service - OpenSSH per-connection server daemon (139.178.68.195:41904). Aug 13 00:41:13.668289 systemd-logind[1973]: Removed session 18. Aug 13 00:41:13.858602 sshd[5011]: Accepted publickey for core from 139.178.68.195 port 41904 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:13.861181 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:13.880299 systemd-logind[1973]: New session 19 of user core. Aug 13 00:41:13.893133 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:41:15.651522 sshd[5013]: Connection closed by 139.178.68.195 port 41904 Aug 13 00:41:15.652814 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:15.657555 systemd-logind[1973]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:41:15.658316 systemd[1]: sshd@18-172.31.31.138:22-139.178.68.195:41904.service: Deactivated successfully. Aug 13 00:41:15.661361 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:41:15.663924 systemd-logind[1973]: Removed session 19. Aug 13 00:41:15.684698 systemd[1]: Started sshd@19-172.31.31.138:22-139.178.68.195:41914.service - OpenSSH per-connection server daemon (139.178.68.195:41914). Aug 13 00:41:15.868826 sshd[5031]: Accepted publickey for core from 139.178.68.195 port 41914 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:15.870736 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:15.877412 systemd-logind[1973]: New session 20 of user core. Aug 13 00:41:15.890828 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 00:41:16.253577 sshd[5033]: Connection closed by 139.178.68.195 port 41914 Aug 13 00:41:16.254829 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:16.259603 systemd-logind[1973]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:41:16.260264 systemd[1]: sshd@19-172.31.31.138:22-139.178.68.195:41914.service: Deactivated successfully. Aug 13 00:41:16.262867 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:41:16.264895 systemd-logind[1973]: Removed session 20. Aug 13 00:41:16.288831 systemd[1]: Started sshd@20-172.31.31.138:22-139.178.68.195:41924.service - OpenSSH per-connection server daemon (139.178.68.195:41924). Aug 13 00:41:16.461553 sshd[5043]: Accepted publickey for core from 139.178.68.195 port 41924 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:16.462149 sshd-session[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:16.467657 systemd-logind[1973]: New session 21 of user core. Aug 13 00:41:16.474830 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:41:16.668613 sshd[5045]: Connection closed by 139.178.68.195 port 41924 Aug 13 00:41:16.670529 sshd-session[5043]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:16.675149 systemd-logind[1973]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:41:16.676379 systemd[1]: sshd@20-172.31.31.138:22-139.178.68.195:41924.service: Deactivated successfully. Aug 13 00:41:16.679743 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:41:16.681779 systemd-logind[1973]: Removed session 21. Aug 13 00:41:21.702124 systemd[1]: Started sshd@21-172.31.31.138:22-139.178.68.195:33518.service - OpenSSH per-connection server daemon (139.178.68.195:33518). Aug 13 00:41:21.880487 sshd[5057]: Accepted publickey for core from 139.178.68.195 port 33518 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:21.881917 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:21.888240 systemd-logind[1973]: New session 22 of user core. Aug 13 00:41:21.895844 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:41:22.088626 sshd[5059]: Connection closed by 139.178.68.195 port 33518 Aug 13 00:41:22.090057 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:22.094232 systemd[1]: sshd@21-172.31.31.138:22-139.178.68.195:33518.service: Deactivated successfully. Aug 13 00:41:22.096885 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:41:22.098519 systemd-logind[1973]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:41:22.100443 systemd-logind[1973]: Removed session 22. Aug 13 00:41:27.121100 systemd[1]: Started sshd@22-172.31.31.138:22-139.178.68.195:33532.service - OpenSSH per-connection server daemon (139.178.68.195:33532). Aug 13 00:41:27.299618 sshd[5074]: Accepted publickey for core from 139.178.68.195 port 33532 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:27.300921 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:27.305962 systemd-logind[1973]: New session 23 of user core. Aug 13 00:41:27.317883 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 00:41:27.508582 sshd[5076]: Connection closed by 139.178.68.195 port 33532 Aug 13 00:41:27.510088 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:27.514283 systemd-logind[1973]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:41:27.514525 systemd[1]: sshd@22-172.31.31.138:22-139.178.68.195:33532.service: Deactivated successfully. Aug 13 00:41:27.517210 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:41:27.518479 systemd-logind[1973]: Removed session 23. Aug 13 00:41:32.542212 systemd[1]: Started sshd@23-172.31.31.138:22-139.178.68.195:38414.service - OpenSSH per-connection server daemon (139.178.68.195:38414). Aug 13 00:41:32.721251 sshd[5091]: Accepted publickey for core from 139.178.68.195 port 38414 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:32.722954 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:32.729377 systemd-logind[1973]: New session 24 of user core. Aug 13 00:41:32.736806 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:41:32.931969 sshd[5093]: Connection closed by 139.178.68.195 port 38414 Aug 13 00:41:32.933740 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:32.937726 systemd[1]: sshd@23-172.31.31.138:22-139.178.68.195:38414.service: Deactivated successfully. Aug 13 00:41:32.940127 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:41:32.941420 systemd-logind[1973]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:41:32.943365 systemd-logind[1973]: Removed session 24. Aug 13 00:41:37.966834 systemd[1]: Started sshd@24-172.31.31.138:22-139.178.68.195:38426.service - OpenSSH per-connection server daemon (139.178.68.195:38426). Aug 13 00:41:38.133297 sshd[5106]: Accepted publickey for core from 139.178.68.195 port 38426 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:38.134772 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:38.141660 systemd-logind[1973]: New session 25 of user core. Aug 13 00:41:38.148805 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:41:38.331164 sshd[5108]: Connection closed by 139.178.68.195 port 38426 Aug 13 00:41:38.332029 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:38.336327 systemd-logind[1973]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:41:38.337578 systemd[1]: sshd@24-172.31.31.138:22-139.178.68.195:38426.service: Deactivated successfully. Aug 13 00:41:38.339933 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:41:38.341380 systemd-logind[1973]: Removed session 25. Aug 13 00:41:38.365653 systemd[1]: Started sshd@25-172.31.31.138:22-139.178.68.195:38430.service - OpenSSH per-connection server daemon (139.178.68.195:38430). Aug 13 00:41:38.559347 sshd[5120]: Accepted publickey for core from 139.178.68.195 port 38430 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:38.560769 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:38.566943 systemd-logind[1973]: New session 26 of user core. Aug 13 00:41:38.577919 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 00:41:40.728972 containerd[1995]: time="2025-08-13T00:41:40.728877167Z" level=info msg="StopContainer for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" with timeout 30 (s)" Aug 13 00:41:40.744492 containerd[1995]: time="2025-08-13T00:41:40.744447213Z" level=info msg="Stop container \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" with signal terminated" Aug 13 00:41:40.803323 systemd[1]: cri-containerd-59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350.scope: Deactivated successfully. Aug 13 00:41:40.808144 containerd[1995]: time="2025-08-13T00:41:40.808057312Z" level=info msg="received exit event container_id:\"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" id:\"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" pid:4090 exited_at:{seconds:1755045700 nanos:807219220}" Aug 13 00:41:40.808144 containerd[1995]: time="2025-08-13T00:41:40.808112252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" id:\"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" pid:4090 exited_at:{seconds:1755045700 nanos:807219220}" Aug 13 00:41:40.851730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350-rootfs.mount: Deactivated successfully. Aug 13 00:41:40.863676 containerd[1995]: time="2025-08-13T00:41:40.863617088Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:41:40.868437 containerd[1995]: time="2025-08-13T00:41:40.868395779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" id:\"3f9df1a98d397fd31c904a6b7f254993702681dcfc60f36639e02380d5993d96\" pid:5148 exited_at:{seconds:1755045700 nanos:868133232}" Aug 13 00:41:40.870458 containerd[1995]: time="2025-08-13T00:41:40.870316011Z" level=info msg="StopContainer for \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" with timeout 2 (s)" Aug 13 00:41:40.871304 containerd[1995]: time="2025-08-13T00:41:40.871262008Z" level=info msg="Stop container \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" with signal terminated" Aug 13 00:41:40.872519 containerd[1995]: time="2025-08-13T00:41:40.872481629Z" level=info msg="StopContainer for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" returns successfully" Aug 13 00:41:40.874498 containerd[1995]: time="2025-08-13T00:41:40.874397622Z" level=info msg="StopPodSandbox for \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\"" Aug 13 00:41:40.884578 systemd-networkd[1839]: lxc_health: Link DOWN Aug 13 00:41:40.884589 systemd-networkd[1839]: lxc_health: Lost carrier Aug 13 00:41:40.894047 containerd[1995]: time="2025-08-13T00:41:40.893997342Z" level=info msg="Container to stop \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:40.911434 systemd[1]: cri-containerd-74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23.scope: Deactivated successfully. 
Aug 13 00:41:40.914332 containerd[1995]: time="2025-08-13T00:41:40.914282628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" id:\"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" pid:3763 exit_status:137 exited_at:{seconds:1755045700 nanos:912737728}" Aug 13 00:41:40.918121 systemd[1]: cri-containerd-bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96.scope: Deactivated successfully. Aug 13 00:41:40.918497 systemd[1]: cri-containerd-bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96.scope: Consumed 8.388s CPU time, 219.5M memory peak, 99.8M read from disk, 13.3M written to disk. Aug 13 00:41:40.920907 containerd[1995]: time="2025-08-13T00:41:40.920773331Z" level=info msg="received exit event container_id:\"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" id:\"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" pid:4202 exited_at:{seconds:1755045700 nanos:919952440}" Aug 13 00:41:40.961373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96-rootfs.mount: Deactivated successfully. Aug 13 00:41:40.975182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23-rootfs.mount: Deactivated successfully. Aug 13 00:41:40.978924 containerd[1995]: time="2025-08-13T00:41:40.978885341Z" level=info msg="shim disconnected" id=74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23 namespace=k8s.io Aug 13 00:41:40.981419 containerd[1995]: time="2025-08-13T00:41:40.979194085Z" level=warning msg="cleaning up after shim disconnected" id=74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23 namespace=k8s.io Aug 13 00:41:40.985605 containerd[1995]: time="2025-08-13T00:41:40.979216450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:40.985844 containerd[1995]: time="2025-08-13T00:41:40.981939679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" id:\"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" pid:4202 exited_at:{seconds:1755045700 nanos:919952440}" Aug 13 00:41:40.988054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23-shm.mount: Deactivated successfully. 
Aug 13 00:41:41.044541 containerd[1995]: time="2025-08-13T00:41:41.044484291Z" level=info msg="TearDown network for sandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" successfully" Aug 13 00:41:41.044541 containerd[1995]: time="2025-08-13T00:41:41.044540006Z" level=info msg="StopPodSandbox for \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" returns successfully" Aug 13 00:41:41.044830 containerd[1995]: time="2025-08-13T00:41:41.044792950Z" level=info msg="received exit event sandbox_id:\"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" exit_status:137 exited_at:{seconds:1755045700 nanos:912737728}" Aug 13 00:41:41.045833 containerd[1995]: time="2025-08-13T00:41:41.045707131Z" level=info msg="StopContainer for \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" returns successfully" Aug 13 00:41:41.050159 containerd[1995]: time="2025-08-13T00:41:41.050120802Z" level=info msg="StopPodSandbox for \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\"" Aug 13 00:41:41.050297 containerd[1995]: time="2025-08-13T00:41:41.050247320Z" level=info msg="Container to stop \"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:41.050297 containerd[1995]: time="2025-08-13T00:41:41.050268037Z" level=info msg="Container to stop \"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:41.050297 containerd[1995]: time="2025-08-13T00:41:41.050279672Z" level=info msg="Container to stop \"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:41.050297 containerd[1995]: time="2025-08-13T00:41:41.050290758Z" level=info msg="Container to stop \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:41.050461 containerd[1995]: time="2025-08-13T00:41:41.050306281Z" level=info msg="Container to stop \"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:41.061444 systemd[1]: cri-containerd-f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515.scope: Deactivated successfully. 
Aug 13 00:41:41.072404 containerd[1995]: time="2025-08-13T00:41:41.072361706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" id:\"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" pid:3714 exit_status:137 exited_at:{seconds:1755045701 nanos:70027704}" Aug 13 00:41:41.085490 kubelet[3300]: I0813 00:41:41.085281 3300 scope.go:117] "RemoveContainer" containerID="59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350" Aug 13 00:41:41.093466 containerd[1995]: time="2025-08-13T00:41:41.093424961Z" level=info msg="RemoveContainer for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\"" Aug 13 00:41:41.107433 kubelet[3300]: I0813 00:41:41.105873 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b2e4db8-60d6-4b82-931e-91f11fb06629-cilium-config-path\") pod \"2b2e4db8-60d6-4b82-931e-91f11fb06629\" (UID: \"2b2e4db8-60d6-4b82-931e-91f11fb06629\") " Aug 13 00:41:41.107433 kubelet[3300]: I0813 00:41:41.105927 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpmhd\" (UniqueName: \"kubernetes.io/projected/2b2e4db8-60d6-4b82-931e-91f11fb06629-kube-api-access-dpmhd\") pod \"2b2e4db8-60d6-4b82-931e-91f11fb06629\" (UID: \"2b2e4db8-60d6-4b82-931e-91f11fb06629\") " Aug 13 00:41:41.117487 kubelet[3300]: I0813 00:41:41.113870 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2e4db8-60d6-4b82-931e-91f11fb06629-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b2e4db8-60d6-4b82-931e-91f11fb06629" (UID: "2b2e4db8-60d6-4b82-931e-91f11fb06629"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:41:41.119531 systemd[1]: var-lib-kubelet-pods-2b2e4db8\x2d60d6\x2d4b82\x2d931e\x2d91f11fb06629-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpmhd.mount: Deactivated successfully. Aug 13 00:41:41.122403 kubelet[3300]: I0813 00:41:41.122364 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2e4db8-60d6-4b82-931e-91f11fb06629-kube-api-access-dpmhd" (OuterVolumeSpecName: "kube-api-access-dpmhd") pod "2b2e4db8-60d6-4b82-931e-91f11fb06629" (UID: "2b2e4db8-60d6-4b82-931e-91f11fb06629"). InnerVolumeSpecName "kube-api-access-dpmhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:41:41.124851 containerd[1995]: time="2025-08-13T00:41:41.124700001Z" level=info msg="RemoveContainer for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" returns successfully" Aug 13 00:41:41.127905 kubelet[3300]: I0813 00:41:41.125404 3300 scope.go:117] "RemoveContainer" containerID="59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350" Aug 13 00:41:41.134207 containerd[1995]: time="2025-08-13T00:41:41.127817123Z" level=error msg="ContainerStatus for \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\": not found" Aug 13 00:41:41.136320 kubelet[3300]: E0813 00:41:41.136267 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\": not found" containerID="59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350" Aug 13 00:41:41.138342 kubelet[3300]: I0813 00:41:41.138137 3300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350"} err="failed to get container status \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\": rpc error: code = NotFound desc = an error occurred when try to find container \"59a52de492c3bfa9776c0bbe266d9be14c6d95767bd872ce196edc2c6f964350\": not found" Aug 13 00:41:41.171587 containerd[1995]: time="2025-08-13T00:41:41.171488147Z" level=info msg="shim disconnected" id=f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515 namespace=k8s.io Aug 13 00:41:41.171587 containerd[1995]: time="2025-08-13T00:41:41.171523708Z" level=warning msg="cleaning up after shim disconnected" id=f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515 namespace=k8s.io Aug 13 00:41:41.171976 containerd[1995]: time="2025-08-13T00:41:41.171537723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:41.207206 kubelet[3300]: I0813 00:41:41.207136 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b2e4db8-60d6-4b82-931e-91f11fb06629-cilium-config-path\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.207206 kubelet[3300]: I0813 00:41:41.207178 3300 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpmhd\" (UniqueName: \"kubernetes.io/projected/2b2e4db8-60d6-4b82-931e-91f11fb06629-kube-api-access-dpmhd\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.210602 containerd[1995]: time="2025-08-13T00:41:41.210496619Z" level=info msg="received exit event sandbox_id:\"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" exit_status:137 exited_at:{seconds:1755045701 nanos:70027704}" Aug 13 00:41:41.210882 containerd[1995]: time="2025-08-13T00:41:41.210854970Z" level=info msg="TearDown network for sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" successfully" Aug 13 00:41:41.210882 containerd[1995]: time="2025-08-13T00:41:41.210879284Z" level=info msg="StopPodSandbox for \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" returns successfully" Aug 13 00:41:41.308259 kubelet[3300]: I0813 00:41:41.308121 3300 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2fa0580-0b53-4bda-9766-22557323fec8-clustermesh-secrets\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308259 kubelet[3300]: I0813 00:41:41.308170 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-run\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308259 kubelet[3300]: I0813 00:41:41.308201 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjxd5\" (UniqueName: \"kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-kube-api-access-wjxd5\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308259 kubelet[3300]: I0813 00:41:41.308229 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-hostproc\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308259 kubelet[3300]: I0813 00:41:41.308248 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cni-path\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308603 kubelet[3300]: I0813 00:41:41.308270 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-lib-modules\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308603 kubelet[3300]: I0813 00:41:41.308294 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-config-path\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308603 kubelet[3300]: I0813 00:41:41.308314 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-net\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308603 kubelet[3300]: I0813 00:41:41.308338 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-cgroup\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308603 kubelet[3300]: I0813 00:41:41.308358 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-xtables-lock\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308603 kubelet[3300]: I0813 00:41:41.308377 3300 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-etc-cni-netd\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308855 kubelet[3300]: I0813 00:41:41.308397 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-kernel\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308855 kubelet[3300]: I0813 00:41:41.308421 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-hubble-tls\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308855 kubelet[3300]: I0813 00:41:41.308443 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-bpf-maps\") pod \"a2fa0580-0b53-4bda-9766-22557323fec8\" (UID: \"a2fa0580-0b53-4bda-9766-22557323fec8\") " Aug 13 00:41:41.308855 kubelet[3300]: I0813 00:41:41.308532 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.309695 kubelet[3300]: I0813 00:41:41.309663 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.309987 kubelet[3300]: I0813 00:41:41.309856 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.309987 kubelet[3300]: I0813 00:41:41.309916 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.309987 kubelet[3300]: I0813 00:41:41.309941 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.309987 kubelet[3300]: I0813 00:41:41.309963 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.310583 kubelet[3300]: I0813 00:41:41.310469 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.313596 kubelet[3300]: I0813 00:41:41.312848 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-hostproc" (OuterVolumeSpecName: "hostproc") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.313596 kubelet[3300]: I0813 00:41:41.312912 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cni-path" (OuterVolumeSpecName: "cni-path") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.313596 kubelet[3300]: I0813 00:41:41.312937 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:41:41.317253 kubelet[3300]: I0813 00:41:41.317191 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:41:41.317695 kubelet[3300]: I0813 00:41:41.317666 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-kube-api-access-wjxd5" (OuterVolumeSpecName: "kube-api-access-wjxd5") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "kube-api-access-wjxd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:41:41.318258 kubelet[3300]: I0813 00:41:41.318230 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:41:41.318752 kubelet[3300]: I0813 00:41:41.318725 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fa0580-0b53-4bda-9766-22557323fec8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a2fa0580-0b53-4bda-9766-22557323fec8" (UID: "a2fa0580-0b53-4bda-9766-22557323fec8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:41:41.391368 systemd[1]: Removed slice kubepods-besteffort-pod2b2e4db8_60d6_4b82_931e_91f11fb06629.slice - libcontainer container kubepods-besteffort-pod2b2e4db8_60d6_4b82_931e_91f11fb06629.slice. Aug 13 00:41:41.409072 kubelet[3300]: I0813 00:41:41.409033 3300 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-lib-modules\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.409072 kubelet[3300]: I0813 00:41:41.409067 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-config-path\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.409072 kubelet[3300]: I0813 00:41:41.409081 3300 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjxd5\" (UniqueName: \"kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-kube-api-access-wjxd5\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409091 3300 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-hostproc\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409100 3300 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cni-path\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409110 3300 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-net\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409118 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-cgroup\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409126 3300 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-xtables-lock\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409133 3300 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-bpf-maps\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409140 3300 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-etc-cni-netd\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410360 kubelet[3300]: I0813 00:41:41.409149 3300 
reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-host-proc-sys-kernel\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410594 kubelet[3300]: I0813 00:41:41.409157 3300 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2fa0580-0b53-4bda-9766-22557323fec8-hubble-tls\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410594 kubelet[3300]: I0813 00:41:41.409164 3300 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2fa0580-0b53-4bda-9766-22557323fec8-clustermesh-secrets\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.410594 kubelet[3300]: I0813 00:41:41.409173 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2fa0580-0b53-4bda-9766-22557323fec8-cilium-run\") on node \"ip-172-31-31-138\" DevicePath \"\"" Aug 13 00:41:41.441137 kubelet[3300]: I0813 00:41:41.441094 3300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2e4db8-60d6-4b82-931e-91f11fb06629" path="/var/lib/kubelet/pods/2b2e4db8-60d6-4b82-931e-91f11fb06629/volumes" Aug 13 00:41:41.446293 systemd[1]: Removed slice kubepods-burstable-poda2fa0580_0b53_4bda_9766_22557323fec8.slice - libcontainer container kubepods-burstable-poda2fa0580_0b53_4bda_9766_22557323fec8.slice. Aug 13 00:41:41.446447 systemd[1]: kubepods-burstable-poda2fa0580_0b53_4bda_9766_22557323fec8.slice: Consumed 8.514s CPU time, 219.8M memory peak, 100.8M read from disk, 13.3M written to disk. Aug 13 00:41:41.849929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515-rootfs.mount: Deactivated successfully. Aug 13 00:41:41.850033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515-shm.mount: Deactivated successfully. Aug 13 00:41:41.850108 systemd[1]: var-lib-kubelet-pods-a2fa0580\x2d0b53\x2d4bda\x2d9766\x2d22557323fec8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwjxd5.mount: Deactivated successfully. Aug 13 00:41:41.850167 systemd[1]: var-lib-kubelet-pods-a2fa0580\x2d0b53\x2d4bda\x2d9766\x2d22557323fec8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:41:41.850227 systemd[1]: var-lib-kubelet-pods-a2fa0580\x2d0b53\x2d4bda\x2d9766\x2d22557323fec8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Aug 13 00:41:42.152969 kubelet[3300]: I0813 00:41:42.152861 3300 scope.go:117] "RemoveContainer" containerID="bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96" Aug 13 00:41:42.178351 containerd[1995]: time="2025-08-13T00:41:42.178292812Z" level=info msg="RemoveContainer for \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\"" Aug 13 00:41:42.205601 containerd[1995]: time="2025-08-13T00:41:42.204686045Z" level=info msg="RemoveContainer for \"bfb0e68d2b8cd647bc19ab976b8be3c663c18dfe9561af34306c163ab2b6dd96\" returns successfully" Aug 13 00:41:42.209556 kubelet[3300]: I0813 00:41:42.209513 3300 scope.go:117] "RemoveContainer" containerID="7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276" Aug 13 00:41:42.213591 containerd[1995]: time="2025-08-13T00:41:42.213537429Z" level=info msg="RemoveContainer for \"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\"" Aug 13 00:41:42.231283 containerd[1995]: time="2025-08-13T00:41:42.231185281Z" level=info msg="RemoveContainer for \"7bdff14a344421548ee795e1ff9e6c51cfc0985c6925c763cf21e1c299b36276\" returns successfully" Aug 13 00:41:42.234740 kubelet[3300]: I0813 00:41:42.231556 3300 scope.go:117] "RemoveContainer" containerID="0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca" Aug 13 00:41:42.239777 containerd[1995]: time="2025-08-13T00:41:42.239735504Z" level=info msg="RemoveContainer for \"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\"" Aug 13 00:41:42.246315 containerd[1995]: time="2025-08-13T00:41:42.246272522Z" level=info msg="RemoveContainer for \"0b82f313e448e18b724c9cb9757900171cadc3ed050dab16d6b8af7db147acca\" returns successfully" Aug 13 00:41:42.246667 kubelet[3300]: I0813 00:41:42.246512 3300 scope.go:117] "RemoveContainer" containerID="0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73" Aug 13 00:41:42.248905 containerd[1995]: time="2025-08-13T00:41:42.248869282Z" level=info msg="RemoveContainer for \"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\"" Aug 13 00:41:42.255614 containerd[1995]: time="2025-08-13T00:41:42.254821140Z" level=info msg="RemoveContainer for \"0bff6a91a020ee02d7d8d5bf6ad4bce4489a226c958ae7374935cea675f24b73\" returns successfully" Aug 13 00:41:42.256381 kubelet[3300]: I0813 00:41:42.256341 3300 scope.go:117] "RemoveContainer" containerID="f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9" Aug 13 00:41:42.258360 containerd[1995]: time="2025-08-13T00:41:42.258319804Z" level=info msg="RemoveContainer for \"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\"" Aug 13 00:41:42.264263 containerd[1995]: time="2025-08-13T00:41:42.264213549Z" level=info msg="RemoveContainer for \"f72528df6daa33c95d0dea697b780421a25d94cd5d14ef46fb4d4b4ca8f6c9d9\" returns successfully" Aug 13 00:41:42.587629 sshd[5122]: Connection closed by 139.178.68.195 port 38430 Aug 13 00:41:42.588245 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:42.592955 systemd-logind[1973]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:41:42.593861 systemd[1]: sshd@25-172.31.31.138:22-139.178.68.195:38430.service: Deactivated successfully. Aug 13 00:41:42.596511 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:41:42.596854 systemd[1]: session-26.scope: Consumed 1.280s CPU time, 27.1M memory peak. Aug 13 00:41:42.599149 systemd-logind[1973]: Removed session 26. 
Aug 13 00:41:42.623721 systemd[1]: Started sshd@26-172.31.31.138:22-139.178.68.195:48126.service - OpenSSH per-connection server daemon (139.178.68.195:48126). Aug 13 00:41:42.818884 sshd[5282]: Accepted publickey for core from 139.178.68.195 port 48126 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A Aug 13 00:41:42.820613 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:42.827649 systemd-logind[1973]: New session 27 of user core. Aug 13 00:41:42.838824 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:41:43.262871 ntpd[1965]: Deleting interface #12 lxc_health, fe80::a4be:20ff:fe21:5d41%8#123, interface stats: received=0, sent=0, dropped=0, active_time=77 secs Aug 13 00:41:43.263397 ntpd[1965]: 13 Aug 00:41:43 ntpd[1965]: Deleting interface #12 lxc_health, fe80::a4be:20ff:fe21:5d41%8#123, interface stats: received=0, sent=0, dropped=0, active_time=77 secs Aug 13 00:41:43.442468 kubelet[3300]: I0813 00:41:43.441806 3300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" path="/var/lib/kubelet/pods/a2fa0580-0b53-4bda-9766-22557323fec8/volumes" Aug 13 00:41:43.507740 kubelet[3300]: E0813 00:41:43.507703 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2b2e4db8-60d6-4b82-931e-91f11fb06629" containerName="cilium-operator" Aug 13 00:41:43.507740 kubelet[3300]: E0813 00:41:43.507734 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" containerName="mount-cgroup" Aug 13 00:41:43.507740 kubelet[3300]: E0813 00:41:43.507744 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" containerName="apply-sysctl-overwrites" Aug 13 00:41:43.507740 kubelet[3300]: E0813 00:41:43.507752 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" containerName="mount-bpf-fs" Aug 13 00:41:43.507740 kubelet[3300]: E0813 00:41:43.507760 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" containerName="clean-cilium-state" Aug 13 00:41:43.508114 kubelet[3300]: E0813 00:41:43.507770 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" containerName="cilium-agent" Aug 13 00:41:43.508114 kubelet[3300]: I0813 00:41:43.507804 3300 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2e4db8-60d6-4b82-931e-91f11fb06629" containerName="cilium-operator" Aug 13 00:41:43.508114 kubelet[3300]: I0813 00:41:43.507814 3300 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fa0580-0b53-4bda-9766-22557323fec8" containerName="cilium-agent" Aug 13 00:41:43.510792 sshd[5284]: Connection closed by 139.178.68.195 port 48126 Aug 13 00:41:43.513133 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:43.528772 systemd[1]: sshd@26-172.31.31.138:22-139.178.68.195:48126.service: Deactivated successfully. Aug 13 00:41:43.534358 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:41:43.550752 systemd-logind[1973]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:41:43.553624 systemd[1]: Created slice kubepods-burstable-poda35a068b_8c8b_4d6f_be13_072750604193.slice - libcontainer container kubepods-burstable-poda35a068b_8c8b_4d6f_be13_072750604193.slice. 
Aug 13 00:41:43.557426 systemd[1]: Started sshd@27-172.31.31.138:22-139.178.68.195:48132.service - OpenSSH per-connection server daemon (139.178.68.195:48132).
Aug 13 00:41:43.562111 systemd-logind[1973]: Removed session 27.
Aug 13 00:41:43.623214 kubelet[3300]: I0813 00:41:43.622599 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-hostproc\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623214 kubelet[3300]: I0813 00:41:43.622654 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-host-proc-sys-net\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623214 kubelet[3300]: I0813 00:41:43.622761 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-cilium-run\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623214 kubelet[3300]: I0813 00:41:43.622807 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a35a068b-8c8b-4d6f-be13-072750604193-cilium-config-path\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623214 kubelet[3300]: I0813 00:41:43.622882 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-host-proc-sys-kernel\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623214 kubelet[3300]: I0813 00:41:43.622930 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-etc-cni-netd\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623563 kubelet[3300]: I0813 00:41:43.622950 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkrq\" (UniqueName: \"kubernetes.io/projected/a35a068b-8c8b-4d6f-be13-072750604193-kube-api-access-7nkrq\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623563 kubelet[3300]: I0813 00:41:43.622975 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a35a068b-8c8b-4d6f-be13-072750604193-hubble-tls\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623563 kubelet[3300]: I0813 00:41:43.623001 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-cilium-cgroup\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623563 kubelet[3300]: I0813 00:41:43.623020 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-lib-modules\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623563 kubelet[3300]: I0813 00:41:43.623040 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a35a068b-8c8b-4d6f-be13-072750604193-cilium-ipsec-secrets\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623563 kubelet[3300]: I0813 00:41:43.623062 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-bpf-maps\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623839 kubelet[3300]: I0813 00:41:43.623084 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-cni-path\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623839 kubelet[3300]: I0813 00:41:43.623106 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a35a068b-8c8b-4d6f-be13-072750604193-xtables-lock\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.623839 kubelet[3300]: I0813 00:41:43.623130 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a35a068b-8c8b-4d6f-be13-072750604193-clustermesh-secrets\") pod \"cilium-2dscn\" (UID: \"a35a068b-8c8b-4d6f-be13-072750604193\") " pod="kube-system/cilium-2dscn"
Aug 13 00:41:43.666014 kubelet[3300]: E0813 00:41:43.665957 3300 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:41:43.779633 sshd[5294]: Accepted publickey for core from 139.178.68.195 port 48132 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A
Aug 13 00:41:43.780634 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:43.787643 systemd-logind[1973]: New session 28 of user core.
Aug 13 00:41:43.796861 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:41:43.867426 containerd[1995]: time="2025-08-13T00:41:43.867365321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2dscn,Uid:a35a068b-8c8b-4d6f-be13-072750604193,Namespace:kube-system,Attempt:0,}"
Aug 13 00:41:43.905525 containerd[1995]: time="2025-08-13T00:41:43.905474214Z" level=info msg="connecting to shim be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f" address="unix:///run/containerd/s/5666427bea10decdf0635bbd692a87ae49c1854f35e481da0b47269fdc2c2e37" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:41:43.925966 sshd[5300]: Connection closed by 139.178.68.195 port 48132
Aug 13 00:41:43.927790 sshd-session[5294]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:43.932901 systemd[1]: Started cri-containerd-be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f.scope - libcontainer container be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f.
Aug 13 00:41:43.933386 systemd[1]: sshd@27-172.31.31.138:22-139.178.68.195:48132.service: Deactivated successfully.
Aug 13 00:41:43.940030 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:41:43.944501 systemd-logind[1973]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:41:43.956468 systemd-logind[1973]: Removed session 28.
Aug 13 00:41:43.958880 systemd[1]: Started sshd@28-172.31.31.138:22-139.178.68.195:48134.service - OpenSSH per-connection server daemon (139.178.68.195:48134).
Aug 13 00:41:43.999013 containerd[1995]: time="2025-08-13T00:41:43.998900268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2dscn,Uid:a35a068b-8c8b-4d6f-be13-072750604193,Namespace:kube-system,Attempt:0,} returns sandbox id \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\""
Aug 13 00:41:44.003922 containerd[1995]: time="2025-08-13T00:41:44.003879881Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:41:44.019389 containerd[1995]: time="2025-08-13T00:41:44.018673488Z" level=info msg="Container 99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:41:44.031286 containerd[1995]: time="2025-08-13T00:41:44.031152990Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\""
Aug 13 00:41:44.033253 containerd[1995]: time="2025-08-13T00:41:44.033213644Z" level=info msg="StartContainer for \"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\""
Aug 13 00:41:44.034582 containerd[1995]: time="2025-08-13T00:41:44.034492280Z" level=info msg="connecting to shim 99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e" address="unix:///run/containerd/s/5666427bea10decdf0635bbd692a87ae49c1854f35e481da0b47269fdc2c2e37" protocol=ttrpc version=3
Aug 13 00:41:44.064837 systemd[1]: Started cri-containerd-99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e.scope - libcontainer container 99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e.
Aug 13 00:41:44.101549 containerd[1995]: time="2025-08-13T00:41:44.101519021Z" level=info msg="StartContainer for \"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\" returns successfully"
Aug 13 00:41:44.146531 sshd[5345]: Accepted publickey for core from 139.178.68.195 port 48134 ssh2: RSA SHA256:2C5UUUFKFtbeXpxut91iAvg9/kHC7TPoVPANvS2Tr9A
Aug 13 00:41:44.148279 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:41:44.154507 systemd-logind[1973]: New session 29 of user core.
Aug 13 00:41:44.161941 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:41:44.162427 systemd[1]: cri-containerd-99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e.scope: Deactivated successfully.
Aug 13 00:41:44.163508 systemd[1]: cri-containerd-99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e.scope: Consumed 24ms CPU time, 9.7M memory peak, 3.2M read from disk.
Aug 13 00:41:44.165836 containerd[1995]: time="2025-08-13T00:41:44.165647842Z" level=info msg="received exit event container_id:\"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\" id:\"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\" pid:5368 exited_at:{seconds:1755045704 nanos:164144806}"
Aug 13 00:41:44.165836 containerd[1995]: time="2025-08-13T00:41:44.165814236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\" id:\"99258acbbff6921889c205c0769bb5fcd5e99868acb76869d04b098c7d34798e\" pid:5368 exited_at:{seconds:1755045704 nanos:164144806}"
Aug 13 00:41:45.185039 containerd[1995]: time="2025-08-13T00:41:45.184989629Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:41:45.230236 containerd[1995]: time="2025-08-13T00:41:45.227479930Z" level=info msg="Container f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:41:45.229083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3944726056.mount: Deactivated successfully.
Aug 13 00:41:45.242487 containerd[1995]: time="2025-08-13T00:41:45.242436211Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\""
Aug 13 00:41:45.243546 containerd[1995]: time="2025-08-13T00:41:45.243499993Z" level=info msg="StartContainer for \"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\""
Aug 13 00:41:45.245340 containerd[1995]: time="2025-08-13T00:41:45.245293260Z" level=info msg="connecting to shim f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0" address="unix:///run/containerd/s/5666427bea10decdf0635bbd692a87ae49c1854f35e481da0b47269fdc2c2e37" protocol=ttrpc version=3
Aug 13 00:41:45.279845 systemd[1]: Started cri-containerd-f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0.scope - libcontainer container f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0.
Aug 13 00:41:45.327740 containerd[1995]: time="2025-08-13T00:41:45.327692367Z" level=info msg="StartContainer for \"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\" returns successfully"
Aug 13 00:41:45.342206 systemd[1]: cri-containerd-f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0.scope: Deactivated successfully.
Aug 13 00:41:45.342466 systemd[1]: cri-containerd-f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0.scope: Consumed 22ms CPU time, 7.5M memory peak, 2.1M read from disk.
Aug 13 00:41:45.343981 containerd[1995]: time="2025-08-13T00:41:45.343925611Z" level=info msg="received exit event container_id:\"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\" id:\"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\" pid:5418 exited_at:{seconds:1755045705 nanos:343696900}"
Aug 13 00:41:45.344202 containerd[1995]: time="2025-08-13T00:41:45.344179913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\" id:\"f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0\" pid:5418 exited_at:{seconds:1755045705 nanos:343696900}"
Aug 13 00:41:45.371108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0b48c7d9414cd3250d23236b5eaa107f2b4b47500437df4db17af42850ccca0-rootfs.mount: Deactivated successfully.
Aug 13 00:41:45.997725 kubelet[3300]: I0813 00:41:45.997671 3300 setters.go:600] "Node became not ready" node="ip-172-31-31-138" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:41:45Z","lastTransitionTime":"2025-08-13T00:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:41:46.191510 containerd[1995]: time="2025-08-13T00:41:46.191016030Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:41:46.211397 containerd[1995]: time="2025-08-13T00:41:46.207673596Z" level=info msg="Container ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:41:46.229993 containerd[1995]: time="2025-08-13T00:41:46.229943314Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\""
Aug 13 00:41:46.231087 containerd[1995]: time="2025-08-13T00:41:46.230878099Z" level=info msg="StartContainer for \"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\""
Aug 13 00:41:46.233809 containerd[1995]: time="2025-08-13T00:41:46.233776391Z" level=info msg="connecting to shim ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286" address="unix:///run/containerd/s/5666427bea10decdf0635bbd692a87ae49c1854f35e481da0b47269fdc2c2e37" protocol=ttrpc version=3
Aug 13 00:41:46.264806 systemd[1]: Started cri-containerd-ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286.scope - libcontainer container ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286.
Aug 13 00:41:46.313361 containerd[1995]: time="2025-08-13T00:41:46.313318523Z" level=info msg="StartContainer for \"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\" returns successfully"
Aug 13 00:41:46.361266 systemd[1]: cri-containerd-ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286.scope: Deactivated successfully.
Aug 13 00:41:46.361557 systemd[1]: cri-containerd-ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286.scope: Consumed 26ms CPU time, 5.8M memory peak, 1.1M read from disk.
Aug 13 00:41:46.362707 containerd[1995]: time="2025-08-13T00:41:46.362655386Z" level=info msg="received exit event container_id:\"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\" id:\"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\" pid:5462 exited_at:{seconds:1755045706 nanos:362401308}"
Aug 13 00:41:46.363152 containerd[1995]: time="2025-08-13T00:41:46.363126969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\" id:\"ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286\" pid:5462 exited_at:{seconds:1755045706 nanos:362401308}"
Aug 13 00:41:46.397085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef7dad408489820ac936f71e2bedced3e01237f6f400eb8596874f896a860286-rootfs.mount: Deactivated successfully.
Aug 13 00:41:46.437417 kubelet[3300]: E0813 00:41:46.437359 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-zv8cf" podUID="ed5538c3-1491-4167-b42f-098390e874ff"
Aug 13 00:41:47.195055 containerd[1995]: time="2025-08-13T00:41:47.194507891Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:41:47.220622 containerd[1995]: time="2025-08-13T00:41:47.218148699Z" level=info msg="Container 184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:41:47.241716 containerd[1995]: time="2025-08-13T00:41:47.241662642Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\""
Aug 13 00:41:47.243704 containerd[1995]: time="2025-08-13T00:41:47.243666932Z" level=info msg="StartContainer for \"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\""
Aug 13 00:41:47.245088 containerd[1995]: time="2025-08-13T00:41:47.245045858Z" level=info msg="connecting to shim 184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6" address="unix:///run/containerd/s/5666427bea10decdf0635bbd692a87ae49c1854f35e481da0b47269fdc2c2e37" protocol=ttrpc version=3
Aug 13 00:41:47.276829 systemd[1]: Started cri-containerd-184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6.scope - libcontainer container 184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6.
Aug 13 00:41:47.308887 systemd[1]: cri-containerd-184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6.scope: Deactivated successfully.
Aug 13 00:41:47.309409 containerd[1995]: time="2025-08-13T00:41:47.309371671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\" id:\"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\" pid:5501 exited_at:{seconds:1755045707 nanos:309021761}"
Aug 13 00:41:47.312999 containerd[1995]: time="2025-08-13T00:41:47.312958118Z" level=info msg="received exit event container_id:\"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\" id:\"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\" pid:5501 exited_at:{seconds:1755045707 nanos:309021761}"
Aug 13 00:41:47.332013 containerd[1995]: time="2025-08-13T00:41:47.331884577Z" level=info msg="StartContainer for \"184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6\" returns successfully"
Aug 13 00:41:47.353700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-184497c2b398cb229ec6ce1bc456374aabee1fa9e82a0d86d2ef731f27b4aab6-rootfs.mount: Deactivated successfully.
Aug 13 00:41:48.201560 containerd[1995]: time="2025-08-13T00:41:48.201506863Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:41:48.222752 containerd[1995]: time="2025-08-13T00:41:48.222704748Z" level=info msg="Container 131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:41:48.240234 containerd[1995]: time="2025-08-13T00:41:48.240165369Z" level=info msg="CreateContainer within sandbox \"be98521289723c88647a6395945fa0804d5118f1a3643703eb720fa6eb3a059f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\""
Aug 13 00:41:48.240885 containerd[1995]: time="2025-08-13T00:41:48.240778148Z" level=info msg="StartContainer for \"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\""
Aug 13 00:41:48.242216 containerd[1995]: time="2025-08-13T00:41:48.242183778Z" level=info msg="connecting to shim 131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd" address="unix:///run/containerd/s/5666427bea10decdf0635bbd692a87ae49c1854f35e481da0b47269fdc2c2e37" protocol=ttrpc version=3
Aug 13 00:41:48.283042 systemd[1]: Started cri-containerd-131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd.scope - libcontainer container 131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd.
Aug 13 00:41:48.324821 containerd[1995]: time="2025-08-13T00:41:48.324226900Z" level=info msg="StartContainer for \"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\" returns successfully"
Aug 13 00:41:48.437412 kubelet[3300]: E0813 00:41:48.437351 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-zv8cf" podUID="ed5538c3-1491-4167-b42f-098390e874ff"
Aug 13 00:41:48.604156 containerd[1995]: time="2025-08-13T00:41:48.603721489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\" id:\"caa251ecd4a97aff7e8805846a66ed782a546ce9cb5f6efcb19344cf752c209b\" pid:5568 exited_at:{seconds:1755045708 nanos:603304190}"
Aug 13 00:41:49.354668 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Aug 13 00:41:50.882527 containerd[1995]: time="2025-08-13T00:41:50.882488029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\" id:\"c95ae1bc89468d0c2c581fb44f311c214a5907535d61f0e13ef16f2ce46079e7\" pid:5664 exit_status:1 exited_at:{seconds:1755045710 nanos:881376172}"
Aug 13 00:41:52.544743 systemd-networkd[1839]: lxc_health: Link UP
Aug 13 00:41:52.557650 systemd-networkd[1839]: lxc_health: Gained carrier
Aug 13 00:41:52.558040 (udev-worker)[6056]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:41:53.217541 containerd[1995]: time="2025-08-13T00:41:53.217483076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\" id:\"73c26bff019f5d1ffc5f1c381df656fdb54f52b504c5c4ddc8c397c2e1ffd806\" pid:6083 exited_at:{seconds:1755045713 nanos:216803440}"
Aug 13 00:41:53.427892 containerd[1995]: time="2025-08-13T00:41:53.427831790Z" level=info msg="StopPodSandbox for \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\""
Aug 13 00:41:53.428313 containerd[1995]: time="2025-08-13T00:41:53.428250381Z" level=info msg="TearDown network for sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" successfully"
Aug 13 00:41:53.428313 containerd[1995]: time="2025-08-13T00:41:53.428273576Z" level=info msg="StopPodSandbox for \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" returns successfully"
Aug 13 00:41:53.429838 containerd[1995]: time="2025-08-13T00:41:53.429788230Z" level=info msg="RemovePodSandbox for \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\""
Aug 13 00:41:53.430078 containerd[1995]: time="2025-08-13T00:41:53.430004809Z" level=info msg="Forcibly stopping sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\""
Aug 13 00:41:53.430754 containerd[1995]: time="2025-08-13T00:41:53.430707891Z" level=info msg="TearDown network for sandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" successfully"
Aug 13 00:41:53.440795 containerd[1995]: time="2025-08-13T00:41:53.439714239Z" level=info msg="Ensure that sandbox f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515 in task-service has been cleanup successfully"
Aug 13 00:41:53.450522 containerd[1995]: time="2025-08-13T00:41:53.450453944Z" level=info msg="RemovePodSandbox \"f4a771018e58eb138da5af8e03499c112c9a71f2e5017e0186906c12e6432515\" returns successfully"
Aug 13 00:41:53.451294 containerd[1995]: time="2025-08-13T00:41:53.451262169Z" level=info msg="StopPodSandbox for \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\""
Aug 13 00:41:53.452117 containerd[1995]: time="2025-08-13T00:41:53.452090046Z" level=info msg="TearDown network for sandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" successfully"
Aug 13 00:41:53.452316 containerd[1995]: time="2025-08-13T00:41:53.452231445Z" level=info msg="StopPodSandbox for \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" returns successfully"
Aug 13 00:41:53.454987 containerd[1995]: time="2025-08-13T00:41:53.454919520Z" level=info msg="RemovePodSandbox for \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\""
Aug 13 00:41:53.455381 containerd[1995]: time="2025-08-13T00:41:53.455126137Z" level=info msg="Forcibly stopping sandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\""
Aug 13 00:41:53.455381 containerd[1995]: time="2025-08-13T00:41:53.455277941Z" level=info msg="TearDown network for sandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" successfully"
Aug 13 00:41:53.457012 containerd[1995]: time="2025-08-13T00:41:53.456983972Z" level=info msg="Ensure that sandbox 74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23 in task-service has been cleanup successfully"
Aug 13 00:41:53.464207 containerd[1995]: time="2025-08-13T00:41:53.464104604Z" level=info msg="RemovePodSandbox \"74ae4b1be46b408cae2c9bf7e6f76615dae66ae1932445dcb61e39f19c9e0e23\" returns successfully"
Aug 13 00:41:53.620741 systemd-networkd[1839]: lxc_health: Gained IPv6LL
Aug 13 00:41:53.920745 kubelet[3300]: I0813 00:41:53.920312 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2dscn" podStartSLOduration=10.920287377 podStartE2EDuration="10.920287377s" podCreationTimestamp="2025-08-13 00:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:41:49.247702041 +0000 UTC m=+115.966891422" watchObservedRunningTime="2025-08-13 00:41:53.920287377 +0000 UTC m=+120.639476759"
Aug 13 00:41:55.403037 containerd[1995]: time="2025-08-13T00:41:55.402989197Z" level=info msg="TaskExit event in podsandbox handler container_id:\"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\" id:\"2d0e05e22f89f243a3f201779722b1e1680cc17b4718f5fc4b423087d01b69e7\" pid:6121 exited_at:{seconds:1755045715 nanos:401361380}"
Aug 13 00:41:56.262907 ntpd[1965]: Listen normally on 15 lxc_health [fe80::74c7:57ff:fe0e:d079%14]:123
Aug 13 00:41:56.263315 ntpd[1965]: 13 Aug 00:41:56 ntpd[1965]: Listen normally on 15 lxc_health [fe80::74c7:57ff:fe0e:d079%14]:123
Aug 13 00:41:57.604540 containerd[1995]: time="2025-08-13T00:41:57.604495318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"131277162d05d4775e79b8b4ccca59ddee9694440ed86ebd560919123c9b16fd\" id:\"489038f9b3e1adfb9e1479f88c25f79b509349334c750eb06148a725d3b26913\" pid:6146 exited_at:{seconds:1755045717 nanos:603622233}"
Aug 13 00:41:57.635703 sshd[5388]: Connection closed by 139.178.68.195 port 48134
Aug 13 00:41:57.637722 sshd-session[5345]: pam_unix(sshd:session): session closed for user core
Aug 13 00:41:57.648533 systemd-logind[1973]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:41:57.651504 systemd[1]: sshd@28-172.31.31.138:22-139.178.68.195:48134.service: Deactivated successfully.
Aug 13 00:41:57.659166 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:41:57.665551 systemd-logind[1973]: Removed session 29.
Aug 13 00:42:25.047707 systemd[1]: cri-containerd-d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d.scope: Deactivated successfully.
Aug 13 00:42:25.048011 systemd[1]: cri-containerd-d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d.scope: Consumed 3.513s CPU time, 65.5M memory peak, 20M read from disk.
Aug 13 00:42:25.050887 containerd[1995]: time="2025-08-13T00:42:25.050857191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\" id:\"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\" pid:3130 exit_status:1 exited_at:{seconds:1755045745 nanos:50291432}"
Aug 13 00:42:25.051324 containerd[1995]: time="2025-08-13T00:42:25.051291169Z" level=info msg="received exit event container_id:\"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\" id:\"d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d\" pid:3130 exit_status:1 exited_at:{seconds:1755045745 nanos:50291432}"
Aug 13 00:42:25.078509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d-rootfs.mount: Deactivated successfully.
Aug 13 00:42:25.317876 kubelet[3300]: I0813 00:42:25.317739 3300 scope.go:117] "RemoveContainer" containerID="d3b17116ddedea3c963a604933e32678b5fa024ff6e0f7828c20d293ff45c27d"
Aug 13 00:42:25.322654 containerd[1995]: time="2025-08-13T00:42:25.322602645Z" level=info msg="CreateContainer within sandbox \"685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 13 00:42:25.340749 containerd[1995]: time="2025-08-13T00:42:25.340706121Z" level=info msg="Container b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:42:25.345544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571237433.mount: Deactivated successfully.
Aug 13 00:42:25.357794 containerd[1995]: time="2025-08-13T00:42:25.357734632Z" level=info msg="CreateContainer within sandbox \"685abbe05335a04938f95adc8993dc0da94fd7c743286ddf6550d15e6750fea0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969\""
Aug 13 00:42:25.358243 containerd[1995]: time="2025-08-13T00:42:25.358213237Z" level=info msg="StartContainer for \"b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969\""
Aug 13 00:42:25.359517 containerd[1995]: time="2025-08-13T00:42:25.359483543Z" level=info msg="connecting to shim b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969" address="unix:///run/containerd/s/c19dd42daa9de68a5fac359becafeb75350aed64cb41cd8412406747fb2fbea0" protocol=ttrpc version=3
Aug 13 00:42:25.383872 systemd[1]: Started cri-containerd-b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969.scope - libcontainer container b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969.
Aug 13 00:42:25.444513 containerd[1995]: time="2025-08-13T00:42:25.444470629Z" level=info msg="StartContainer for \"b83ddddae020930feab6fa5e35e89f6c4e2835e037de92c38c07fea3db6c1969\" returns successfully"
Aug 13 00:42:26.390578 kubelet[3300]: E0813 00:42:26.390345 3300 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-138?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 13 00:42:30.611218 systemd[1]: cri-containerd-53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b.scope: Deactivated successfully.
Aug 13 00:42:30.611641 systemd[1]: cri-containerd-53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b.scope: Consumed 1.859s CPU time, 27.1M memory peak, 8.9M read from disk.
Aug 13 00:42:30.615016 containerd[1995]: time="2025-08-13T00:42:30.614967974Z" level=info msg="received exit event container_id:\"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\" id:\"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\" pid:3149 exit_status:1 exited_at:{seconds:1755045750 nanos:612994112}"
Aug 13 00:42:30.616016 containerd[1995]: time="2025-08-13T00:42:30.615235040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\" id:\"53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b\" pid:3149 exit_status:1 exited_at:{seconds:1755045750 nanos:612994112}"
Aug 13 00:42:30.643960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b-rootfs.mount: Deactivated successfully.
Aug 13 00:42:31.337589 kubelet[3300]: I0813 00:42:31.337546 3300 scope.go:117] "RemoveContainer" containerID="53a9184b1f578ad7a0aaca081605870d0e5c3d30bfb561108809ab39f1356e1b"
Aug 13 00:42:31.340318 containerd[1995]: time="2025-08-13T00:42:31.340222692Z" level=info msg="CreateContainer within sandbox \"0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 13 00:42:31.358759 containerd[1995]: time="2025-08-13T00:42:31.358707274Z" level=info msg="Container 5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:42:31.373724 containerd[1995]: time="2025-08-13T00:42:31.373678192Z" level=info msg="CreateContainer within sandbox \"0ed558137596ff8ad5073838b09479f1baab8b94a7ec004b76832bcea0683e0b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d\""
Aug 13 00:42:31.374274 containerd[1995]: time="2025-08-13T00:42:31.374249040Z" level=info msg="StartContainer for \"5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d\""
Aug 13 00:42:31.375559 containerd[1995]: time="2025-08-13T00:42:31.375525386Z" level=info msg="connecting to shim 5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d" address="unix:///run/containerd/s/2e13b7f34a79709ef33d344725a75712c83e79cd645842e93db19b1039ada265" protocol=ttrpc version=3
Aug 13 00:42:31.402825 systemd[1]: Started cri-containerd-5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d.scope - libcontainer container 5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d.
Aug 13 00:42:31.464534 containerd[1995]: time="2025-08-13T00:42:31.464484675Z" level=info msg="StartContainer for \"5fa65e047be0e9dfa9d6c7f1206aea1a121f33bb95f8903372c7111c84fc194d\" returns successfully"