Aug 13 01:00:48.987198 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 01:00:48.987228 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:00:48.987246 kernel: BIOS-provided physical RAM map:
Aug 13 01:00:48.987258 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 01:00:48.987268 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Aug 13 01:00:48.987279 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Aug 13 01:00:48.987292 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Aug 13 01:00:48.987304 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Aug 13 01:00:48.987318 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Aug 13 01:00:48.987329 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Aug 13 01:00:48.987341 kernel: NX (Execute Disable) protection: active
Aug 13 01:00:48.987352 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Aug 13 01:00:48.987364 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Aug 13 01:00:48.987375 kernel: extended physical RAM map:
Aug 13 01:00:48.987392 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 01:00:48.987405 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
Aug 13 01:00:48.987417 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
Aug 13 01:00:48.987429 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
Aug 13 01:00:48.987442 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Aug 13 01:00:48.987454 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Aug 13 01:00:48.987467 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Aug 13 01:00:48.987479 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Aug 13 01:00:48.987491 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Aug 13 01:00:48.987503 kernel: efi: EFI v2.70 by EDK II
Aug 13 01:00:48.987518 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
Aug 13 01:00:48.989602 kernel: SMBIOS 2.7 present.
Aug 13 01:00:48.989622 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 13 01:00:48.989636 kernel: Hypervisor detected: KVM
Aug 13 01:00:48.989649 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:00:48.989661 kernel: kvm-clock: cpu 0, msr 2619e001, primary cpu clock
Aug 13 01:00:48.989673 kernel: kvm-clock: using sched offset of 4008628060 cycles
Aug 13 01:00:48.989687 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:00:48.989700 kernel: tsc: Detected 2499.996 MHz processor
Aug 13 01:00:48.989712 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:00:48.989725 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:00:48.989743 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Aug 13 01:00:48.989755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:00:48.989768 kernel: Using GB pages for direct mapping
Aug 13 01:00:48.989780 kernel: Secure boot disabled
Aug 13 01:00:48.989794 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:00:48.989811 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Aug 13 01:00:48.989825 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 13 01:00:48.989841 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 13 01:00:48.989855 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 13 01:00:48.989869 kernel: ACPI: FACS 0x00000000789D0000 000040
Aug 13 01:00:48.989882 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 13 01:00:48.989896 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 13 01:00:48.989910 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 13 01:00:48.989923 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 13 01:00:48.989939 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 13 01:00:48.989953 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 13 01:00:48.989967 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 13 01:00:48.989980 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Aug 13 01:00:48.989994 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Aug 13 01:00:48.990008 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Aug 13 01:00:48.990021 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Aug 13 01:00:48.990035 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Aug 13 01:00:48.990048 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Aug 13 01:00:48.990064 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Aug 13 01:00:48.990078 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Aug 13 01:00:48.990091 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Aug 13 01:00:48.990105 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Aug 13 01:00:48.990118 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Aug 13 01:00:48.990132 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Aug 13 01:00:48.990145 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 01:00:48.990159 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 01:00:48.990172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 13 01:00:48.990188 kernel: NUMA: Initialized distance table, cnt=1
Aug 13 01:00:48.990201 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Aug 13 01:00:48.990215 kernel: Zone ranges:
Aug 13 01:00:48.990229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:00:48.990242 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Aug 13 01:00:48.990256 kernel: Normal empty
Aug 13 01:00:48.990269 kernel: Movable zone start for each node
Aug 13 01:00:48.990288 kernel: Early memory node ranges
Aug 13 01:00:48.990300 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 01:00:48.990314 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Aug 13 01:00:48.990325 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Aug 13 01:00:48.990335 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Aug 13 01:00:48.990346 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:00:48.990357 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 01:00:48.990369 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Aug 13 01:00:48.990380 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Aug 13 01:00:48.990394 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 13 01:00:48.990407 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:00:48.990425 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 13 01:00:48.990439 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:00:48.990453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:00:48.990467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:00:48.990481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:00:48.990495 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:00:48.990510 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:00:48.990524 kernel: TSC deadline timer available
Aug 13 01:00:48.997613 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 01:00:48.997639 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Aug 13 01:00:48.997651 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:00:48.997664 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:00:48.997677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:00:48.997689 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Aug 13 01:00:48.997701 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Aug 13 01:00:48.997714 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:00:48.997726 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
Aug 13 01:00:48.997738 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:00:48.997753 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:00:48.997766 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Aug 13 01:00:48.997778 kernel: Policy zone: DMA32
Aug 13 01:00:48.997793 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:00:48.997806 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:00:48.997818 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:00:48.997831 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 01:00:48.997843 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:00:48.997859 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 160904K reserved, 0K cma-reserved)
Aug 13 01:00:48.997871 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:00:48.997883 kernel: Kernel/User page tables isolation: enabled
Aug 13 01:00:48.997896 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 01:00:48.997908 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 01:00:48.997920 kernel: rcu: Hierarchical RCU implementation.
Aug 13 01:00:48.997934 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:00:48.997960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:00:48.997972 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:00:48.997985 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:00:48.997998 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:00:48.998011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:00:48.998026 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:00:48.998038 kernel: random: crng init done
Aug 13 01:00:48.998051 kernel: Console: colour dummy device 80x25
Aug 13 01:00:48.998064 kernel: printk: console [tty0] enabled
Aug 13 01:00:48.998076 kernel: printk: console [ttyS0] enabled
Aug 13 01:00:48.998089 kernel: ACPI: Core revision 20210730
Aug 13 01:00:48.998102 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 13 01:00:48.998118 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:00:48.998131 kernel: x2apic enabled
Aug 13 01:00:48.998144 kernel: Switched APIC routing to physical x2apic.
Aug 13 01:00:48.998157 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 13 01:00:48.998170 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Aug 13 01:00:48.998183 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 01:00:48.998196 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 01:00:48.998213 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:00:48.998226 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:00:48.998239 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:00:48.998252 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 01:00:48.998265 kernel: RETBleed: Vulnerable
Aug 13 01:00:48.998278 kernel: Speculative Store Bypass: Vulnerable
Aug 13 01:00:48.998290 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:00:48.998303 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:00:48.998315 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 13 01:00:48.998328 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 01:00:48.998340 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:00:48.998356 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:00:48.998368 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:00:48.998381 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 13 01:00:48.998394 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 13 01:00:48.998406 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 01:00:48.998419 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 01:00:48.998431 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 01:00:48.998444 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:00:48.998457 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:00:48.998469 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 13 01:00:48.998482 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 13 01:00:48.998497 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 13 01:00:48.998509 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 13 01:00:48.998522 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 13 01:00:48.998545 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 13 01:00:48.998564 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 13 01:00:48.998575 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:00:48.998586 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:00:48.998597 kernel: LSM: Security Framework initializing
Aug 13 01:00:48.998609 kernel: SELinux: Initializing.
Aug 13 01:00:48.998622 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 01:00:48.998635 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 01:00:48.998650 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 01:00:48.998662 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 01:00:48.998676 kernel: signal: max sigframe size: 3632
Aug 13 01:00:48.998688 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:00:48.998703 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 01:00:48.998717 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:00:48.998730 kernel: x86: Booting SMP configuration:
Aug 13 01:00:48.998742 kernel: .... node #0, CPUs: #1
Aug 13 01:00:48.998756 kernel: kvm-clock: cpu 1, msr 2619e041, secondary cpu clock
Aug 13 01:00:48.998770 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
Aug 13 01:00:48.998788 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 13 01:00:48.998804 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 01:00:48.998819 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:00:48.998834 kernel: smpboot: Max logical packages: 1
Aug 13 01:00:48.998848 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Aug 13 01:00:48.998863 kernel: devtmpfs: initialized
Aug 13 01:00:48.998878 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:00:48.998893 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Aug 13 01:00:48.998911 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:00:48.998926 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:00:48.998941 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:00:48.998956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:00:48.998971 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:00:48.998986 kernel: audit: type=2000 audit(1755046848.879:1): state=initialized audit_enabled=0 res=1
Aug 13 01:00:48.999001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:00:48.999016 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:00:48.999030 kernel: cpuidle: using governor menu
Aug 13 01:00:48.999048 kernel: ACPI: bus type PCI registered
Aug 13 01:00:48.999063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:00:48.999078 kernel: dca service started, version 1.12.1
Aug 13 01:00:48.999093 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:00:48.999109 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:00:48.999124 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:00:48.999139 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:00:48.999154 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:00:48.999169 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:00:48.999187 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:00:48.999201 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 01:00:48.999216 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 01:00:48.999231 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 01:00:48.999246 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 13 01:00:48.999261 kernel: ACPI: Interpreter enabled
Aug 13 01:00:48.999276 kernel: ACPI: PM: (supports S0 S5)
Aug 13 01:00:48.999291 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:00:48.999306 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:00:48.999323 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 01:00:48.999338 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:00:48.999567 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:00:48.999706 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Aug 13 01:00:48.999725 kernel: acpiphp: Slot [3] registered
Aug 13 01:00:48.999740 kernel: acpiphp: Slot [4] registered
Aug 13 01:00:48.999756 kernel: acpiphp: Slot [5] registered
Aug 13 01:00:48.999774 kernel: acpiphp: Slot [6] registered
Aug 13 01:00:48.999789 kernel: acpiphp: Slot [7] registered
Aug 13 01:00:48.999803 kernel: acpiphp: Slot [8] registered
Aug 13 01:00:48.999818 kernel: acpiphp: Slot [9] registered
Aug 13 01:00:48.999832 kernel: acpiphp: Slot [10] registered
Aug 13 01:00:48.999846 kernel: acpiphp: Slot [11] registered
Aug 13 01:00:48.999861 kernel: acpiphp: Slot [12] registered
Aug 13 01:00:48.999876 kernel: acpiphp: Slot [13] registered
Aug 13 01:00:48.999891 kernel: acpiphp: Slot [14] registered
Aug 13 01:00:48.999906 kernel: acpiphp: Slot [15] registered
Aug 13 01:00:48.999923 kernel: acpiphp: Slot [16] registered
Aug 13 01:00:48.999938 kernel: acpiphp: Slot [17] registered
Aug 13 01:00:48.999953 kernel: acpiphp: Slot [18] registered
Aug 13 01:00:48.999967 kernel: acpiphp: Slot [19] registered
Aug 13 01:00:48.999982 kernel: acpiphp: Slot [20] registered
Aug 13 01:00:48.999997 kernel: acpiphp: Slot [21] registered
Aug 13 01:00:49.000011 kernel: acpiphp: Slot [22] registered
Aug 13 01:00:49.000026 kernel: acpiphp: Slot [23] registered
Aug 13 01:00:49.000041 kernel: acpiphp: Slot [24] registered
Aug 13 01:00:49.000058 kernel: acpiphp: Slot [25] registered
Aug 13 01:00:49.000072 kernel: acpiphp: Slot [26] registered
Aug 13 01:00:49.000087 kernel: acpiphp: Slot [27] registered
Aug 13 01:00:49.000102 kernel: acpiphp: Slot [28] registered
Aug 13 01:00:49.000116 kernel: acpiphp: Slot [29] registered
Aug 13 01:00:49.000131 kernel: acpiphp: Slot [30] registered
Aug 13 01:00:49.000145 kernel: acpiphp: Slot [31] registered
Aug 13 01:00:49.000160 kernel: PCI host bridge to bus 0000:00
Aug 13 01:00:49.000289 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:00:49.000410 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:00:49.000537 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:00:49.000667 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 01:00:49.000800 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Aug 13 01:00:49.000912 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:00:49.001067 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 13 01:00:49.001206 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 13 01:00:49.001344 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 13 01:00:49.001471 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 13 01:00:49.001656 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 13 01:00:49.001783 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 13 01:00:49.001943 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 13 01:00:49.002982 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 13 01:00:49.003134 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 13 01:00:49.003261 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 13 01:00:49.003389 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 13 01:00:49.003511 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Aug 13 01:00:49.003644 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Aug 13 01:00:49.003763 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Aug 13 01:00:49.003880 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:00:49.004010 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 13 01:00:49.004130 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Aug 13 01:00:49.004252 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 13 01:00:49.004371 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Aug 13 01:00:49.004388 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:00:49.004401 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:00:49.004415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:00:49.004430 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:00:49.004444 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 01:00:49.004457 kernel: iommu: Default domain type: Translated
Aug 13 01:00:49.004470 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:00:49.005695 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 13 01:00:49.006002 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:00:49.006515 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 13 01:00:49.006587 kernel: vgaarb: loaded
Aug 13 01:00:49.006604 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 01:00:49.006625 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 01:00:49.006640 kernel: PTP clock support registered
Aug 13 01:00:49.006655 kernel: Registered efivars operations
Aug 13 01:00:49.006671 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:00:49.006686 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:00:49.006701 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
Aug 13 01:00:49.006716 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Aug 13 01:00:49.006730 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Aug 13 01:00:49.006745 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 13 01:00:49.006763 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 13 01:00:49.006778 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:00:49.006793 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:00:49.006808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:00:49.006823 kernel: pnp: PnP ACPI init
Aug 13 01:00:49.006838 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:00:49.006854 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:00:49.006867 kernel: NET: Registered PF_INET protocol family
Aug 13 01:00:49.006883 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:00:49.006901 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 01:00:49.006916 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:00:49.006931 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 01:00:49.006947 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Aug 13 01:00:49.006962 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 01:00:49.006977 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 01:00:49.006992 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 01:00:49.007007 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:00:49.007025 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:00:49.007165 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:00:49.007281 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:00:49.007396 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:00:49.007509 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 01:00:49.007636 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Aug 13 01:00:49.007768 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 01:00:49.007897 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Aug 13 01:00:49.007919 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:00:49.007933 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 01:00:49.007947 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 13 01:00:49.007960 kernel: clocksource: Switched to clocksource tsc
Aug 13 01:00:49.007973 kernel: Initialise system trusted keyrings
Aug 13 01:00:49.007986 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 01:00:49.007999 kernel: Key type asymmetric registered
Aug 13 01:00:49.008011 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:00:49.008025 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 01:00:49.008041 kernel: io scheduler mq-deadline registered
Aug 13 01:00:49.008054 kernel: io scheduler kyber registered
Aug 13 01:00:49.008067 kernel: io scheduler bfq registered
Aug 13 01:00:49.008080 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:00:49.008093 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:00:49.008106 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:00:49.008120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:00:49.008132 kernel: i8042: Warning: Keylock active
Aug 13 01:00:49.008145 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:00:49.008161 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:00:49.008297 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 13 01:00:49.008412 kernel: rtc_cmos 00:00: registered as rtc0
Aug 13 01:00:49.008523 kernel: rtc_cmos 00:00: setting system clock to 2025-08-13T01:00:48 UTC (1755046848)
Aug 13 01:00:49.008647 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 13 01:00:49.008662 kernel: intel_pstate: CPU model not supported
Aug 13 01:00:49.008676 kernel: efifb: probing for efifb
Aug 13 01:00:49.008688 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Aug 13 01:00:49.008705 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Aug 13 01:00:49.008717 kernel: efifb: scrolling: redraw
Aug 13 01:00:49.008730 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 01:00:49.008751 kernel: Console: switching to colour frame buffer device 100x37
Aug 13 01:00:49.008765 kernel: fb0: EFI VGA frame buffer device
Aug 13 01:00:49.008778 kernel: pstore: Registered efi as persistent store backend
Aug 13 01:00:49.008814 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:00:49.008831 kernel: Segment Routing with IPv6
Aug 13 01:00:49.008845 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:00:49.008862 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:00:49.008875 kernel: Key type dns_resolver registered
Aug 13 01:00:49.008888 kernel: IPI shorthand broadcast: enabled
Aug 13 01:00:49.008903 kernel: sched_clock: Marking stable (372640346, 135061409)->(575226453, -67524698)
Aug 13 01:00:49.008917 kernel: registered taskstats version 1
Aug 13 01:00:49.008933 kernel: Loading compiled-in X.509 certificates
Aug 13 01:00:49.008947 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433'
Aug 13 01:00:49.008960 kernel: Key type .fscrypt registered
Aug 13 01:00:49.008973 kernel: Key type fscrypt-provisioning registered
Aug 13 01:00:49.008990 kernel: pstore: Using crash dump compression: deflate
Aug 13 01:00:49.009005 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:00:49.009018 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:00:49.009031 kernel: ima: No architecture policies found
Aug 13 01:00:49.009045 kernel: clk: Disabling unused clocks
Aug 13 01:00:49.009058 kernel: Freeing unused kernel image (initmem) memory: 47488K
Aug 13 01:00:49.009072 kernel: Write protecting the kernel read-only data: 28672k
Aug 13 01:00:49.009086 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Aug 13 01:00:49.009099 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Aug 13 01:00:49.009115 kernel: Run /init as init process
Aug 13 01:00:49.009129 kernel: with arguments:
Aug 13 01:00:49.009143 kernel: /init
Aug 13 01:00:49.009156 kernel: with environment:
Aug 13 01:00:49.009169 kernel: HOME=/
Aug 13 01:00:49.009182 kernel: TERM=linux
Aug 13 01:00:49.009195 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:00:49.009212 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 01:00:49.009231 systemd[1]: Detected virtualization amazon.
Aug 13 01:00:49.009245 systemd[1]: Detected architecture x86-64.
Aug 13 01:00:49.009258 systemd[1]: Running in initrd.
Aug 13 01:00:49.009272 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:00:49.009285 systemd[1]: Hostname set to .
Aug 13 01:00:49.009300 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 01:00:49.009314 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:00:49.009331 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 01:00:49.009347 systemd[1]: Reached target cryptsetup.target.
Aug 13 01:00:49.009361 systemd[1]: Reached target paths.target.
Aug 13 01:00:49.009374 systemd[1]: Reached target slices.target.
Aug 13 01:00:49.009388 systemd[1]: Reached target swap.target.
Aug 13 01:00:49.009401 systemd[1]: Reached target timers.target.
Aug 13 01:00:49.009418 systemd[1]: Listening on iscsid.socket.
Aug 13 01:00:49.009432 systemd[1]: Listening on iscsiuio.socket.
Aug 13 01:00:49.009446 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 01:00:49.009459 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 01:00:49.009474 systemd[1]: Listening on systemd-journald.socket.
Aug 13 01:00:49.009488 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 01:00:49.009501 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 01:00:49.009515 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 01:00:49.009541 systemd[1]: Reached target sockets.target.
Aug 13 01:00:49.009555 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 01:00:49.009569 systemd[1]: Finished network-cleanup.service.
Aug 13 01:00:49.009583 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:00:49.009597 systemd[1]: Starting systemd-journald.service...
Aug 13 01:00:49.009611 systemd[1]: Starting systemd-modules-load.service...
Aug 13 01:00:49.009626 systemd[1]: Starting systemd-resolved.service...
Aug 13 01:00:49.009640 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 01:00:49.009654 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 01:00:49.009670 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:00:49.009684 kernel: audit: type=1130 audit(1755046848.990:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.021638 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 01:00:49.021660 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:00:49.021677 kernel: audit: type=1130 audit(1755046849.009:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.021694 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 01:00:49.021718 systemd-journald[185]: Journal started
Aug 13 01:00:49.021813 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2ea6e0d803d95e6bdf93e2961734c7) is 4.8M, max 38.3M, 33.5M free.
Aug 13 01:00:48.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:48.998583 systemd-modules-load[186]: Inserted module 'overlay'
Aug 13 01:00:49.035161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 01:00:49.039561 systemd[1]: Started systemd-journald.service.
Aug 13 01:00:49.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.062557 kernel: audit: type=1130 audit(1755046849.051:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.062618 kernel: audit: type=1130 audit(1755046849.059:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.060073 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 01:00:49.062542 systemd[1]: Starting dracut-cmdline.service...
Aug 13 01:00:49.072664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 01:00:49.076196 systemd-resolved[187]: Positive Trust Anchors:
Aug 13 01:00:49.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.078580 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:00:49.093205 kernel: audit: type=1130 audit(1755046849.074:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.078638 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 01:00:49.118401 kernel: audit: type=1130 audit(1755046849.090:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.118438 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:00:49.118459 kernel: Bridge firewalling registered
Aug 13 01:00:49.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.118582 dracut-cmdline[202]: dracut-dracut-053
Aug 13 01:00:49.118582 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:00:49.087258 systemd-resolved[187]: Defaulting to hostname 'linux'.
Aug 13 01:00:49.088480 systemd[1]: Started systemd-resolved.service.
Aug 13 01:00:49.091982 systemd[1]: Reached target nss-lookup.target.
Aug 13 01:00:49.107366 systemd-modules-load[186]: Inserted module 'br_netfilter'
Aug 13 01:00:49.151558 kernel: SCSI subsystem initialized
Aug 13 01:00:49.166380 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:00:49.166455 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:00:49.169555 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 01:00:49.173996 systemd-modules-load[186]: Inserted module 'dm_multipath'
Aug 13 01:00:49.175019 systemd[1]: Finished systemd-modules-load.service.
Aug 13 01:00:49.179240 systemd[1]: Starting systemd-sysctl.service...
Aug 13 01:00:49.195464 kernel: audit: type=1130 audit(1755046849.176:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.198894 systemd[1]: Finished systemd-sysctl.service.
Aug 13 01:00:49.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.207564 kernel: audit: type=1130 audit(1755046849.199:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.215555 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:00:49.235563 kernel: iscsi: registered transport (tcp)
Aug 13 01:00:49.260456 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:00:49.260551 kernel: QLogic iSCSI HBA Driver
Aug 13 01:00:49.292641 systemd[1]: Finished dracut-cmdline.service.
Aug 13 01:00:49.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.301557 kernel: audit: type=1130 audit(1755046849.291:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.294668 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 01:00:49.349597 kernel: raid6: avx512x4 gen() 18055 MB/s
Aug 13 01:00:49.367580 kernel: raid6: avx512x4 xor() 7439 MB/s
Aug 13 01:00:49.385584 kernel: raid6: avx512x2 gen() 17387 MB/s
Aug 13 01:00:49.403578 kernel: raid6: avx512x2 xor() 24492 MB/s
Aug 13 01:00:49.421583 kernel: raid6: avx512x1 gen() 17813 MB/s
Aug 13 01:00:49.439572 kernel: raid6: avx512x1 xor() 21967 MB/s
Aug 13 01:00:49.457575 kernel: raid6: avx2x4 gen() 17717 MB/s
Aug 13 01:00:49.475575 kernel: raid6: avx2x4 xor() 7065 MB/s
Aug 13 01:00:49.493574 kernel: raid6: avx2x2 gen() 17632 MB/s
Aug 13 01:00:49.511588 kernel: raid6: avx2x2 xor() 17953 MB/s
Aug 13 01:00:49.529587 kernel: raid6: avx2x1 gen() 13624 MB/s
Aug 13 01:00:49.547577 kernel: raid6: avx2x1 xor() 15881 MB/s
Aug 13 01:00:49.565565 kernel: raid6: sse2x4 gen() 9572 MB/s
Aug 13 01:00:49.583591 kernel: raid6: sse2x4 xor() 5473 MB/s
Aug 13 01:00:49.601590 kernel: raid6: sse2x2 gen() 10495 MB/s
Aug 13 01:00:49.619586 kernel: raid6: sse2x2 xor() 6106 MB/s
Aug 13 01:00:49.637580 kernel: raid6: sse2x1 gen() 9518 MB/s
Aug 13 01:00:49.655844 kernel: raid6: sse2x1 xor() 4838 MB/s
Aug 13 01:00:49.655896 kernel: raid6: using algorithm avx512x4 gen() 18055 MB/s
Aug 13 01:00:49.655915 kernel: raid6: .... xor() 7439 MB/s, rmw enabled
Aug 13 01:00:49.656978 kernel: raid6: using avx512x2 recovery algorithm
Aug 13 01:00:49.671556 kernel: xor: automatically using best checksumming function avx
Aug 13 01:00:49.775565 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Aug 13 01:00:49.784498 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 01:00:49.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.783000 audit: BPF prog-id=7 op=LOAD
Aug 13 01:00:49.784000 audit: BPF prog-id=8 op=LOAD
Aug 13 01:00:49.786063 systemd[1]: Starting systemd-udevd.service...
Aug 13 01:00:49.799165 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Aug 13 01:00:49.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.804599 systemd[1]: Started systemd-udevd.service.
Aug 13 01:00:49.806459 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 01:00:49.826691 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation
Aug 13 01:00:49.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.858208 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 01:00:49.859423 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 01:00:49.908341 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 01:00:49.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:49.967041 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 13 01:00:49.980727 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 13 01:00:49.980925 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Aug 13 01:00:49.981079 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:00:49.981104 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:44:50:56:33:b1
Aug 13 01:00:49.985391 (udev-worker)[439]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 01:00:50.016388 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 01:00:50.016462 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:00:50.028562 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 13 01:00:50.028846 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 01:00:50.041549 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 01:00:50.049415 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:00:50.049479 kernel: GPT:9289727 != 16777215
Aug 13 01:00:50.049502 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:00:50.050765 kernel: GPT:9289727 != 16777215
Aug 13 01:00:50.051748 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:00:50.053018 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 01:00:50.104552 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (441)
Aug 13 01:00:50.129440 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Aug 13 01:00:50.151612 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Aug 13 01:00:50.157264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Aug 13 01:00:50.169240 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Aug 13 01:00:50.170020 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Aug 13 01:00:50.176802 systemd[1]: Starting disk-uuid.service...
Aug 13 01:00:50.183832 disk-uuid[593]: Primary Header is updated.
Aug 13 01:00:50.183832 disk-uuid[593]: Secondary Entries is updated.
Aug 13 01:00:50.183832 disk-uuid[593]: Secondary Header is updated.
Aug 13 01:00:50.190575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 01:00:50.197566 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 01:00:50.201562 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 01:00:51.208577 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 01:00:51.208631 disk-uuid[594]: The operation has completed successfully.
Aug 13 01:00:51.342736 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:00:51.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.342854 systemd[1]: Finished disk-uuid.service.
Aug 13 01:00:51.353584 systemd[1]: Starting verity-setup.service...
Aug 13 01:00:51.374919 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 01:00:51.481841 systemd[1]: Found device dev-mapper-usr.device.
Aug 13 01:00:51.483914 systemd[1]: Finished verity-setup.service.
Aug 13 01:00:51.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.485749 systemd[1]: Mounting sysusr-usr.mount...
Aug 13 01:00:51.588546 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Aug 13 01:00:51.589811 systemd[1]: Mounted sysusr-usr.mount.
Aug 13 01:00:51.590549 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Aug 13 01:00:51.591278 systemd[1]: Starting ignition-setup.service...
Aug 13 01:00:51.594121 systemd[1]: Starting parse-ip-for-networkd.service...
Aug 13 01:00:51.623128 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:00:51.623202 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 01:00:51.623223 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Aug 13 01:00:51.646562 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 01:00:51.661729 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 01:00:51.672376 systemd[1]: Finished ignition-setup.service.
Aug 13 01:00:51.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.674397 systemd[1]: Starting ignition-fetch-offline.service...
Aug 13 01:00:51.681120 systemd[1]: Finished parse-ip-for-networkd.service.
Aug 13 01:00:51.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.681000 audit: BPF prog-id=9 op=LOAD
Aug 13 01:00:51.683432 systemd[1]: Starting systemd-networkd.service...
Aug 13 01:00:51.706814 systemd-networkd[1106]: lo: Link UP
Aug 13 01:00:51.706827 systemd-networkd[1106]: lo: Gained carrier
Aug 13 01:00:51.707467 systemd-networkd[1106]: Enumeration completed
Aug 13 01:00:51.707757 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:00:51.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.709673 systemd[1]: Started systemd-networkd.service.
Aug 13 01:00:51.710506 systemd[1]: Reached target network.target.
Aug 13 01:00:51.711779 systemd-networkd[1106]: eth0: Link UP
Aug 13 01:00:51.711785 systemd-networkd[1106]: eth0: Gained carrier
Aug 13 01:00:51.714506 systemd[1]: Starting iscsiuio.service...
Aug 13 01:00:51.722555 systemd[1]: Started iscsiuio.service.
Aug 13 01:00:51.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.724324 systemd[1]: Starting iscsid.service...
Aug 13 01:00:51.728690 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.20.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 01:00:51.731884 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 01:00:51.731884 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Aug 13 01:00:51.731884 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Aug 13 01:00:51.731884 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored.
Aug 13 01:00:51.731884 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Aug 13 01:00:51.731884 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Aug 13 01:00:51.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.732219 systemd[1]: Started iscsid.service.
Aug 13 01:00:51.735363 systemd[1]: Starting dracut-initqueue.service...
Aug 13 01:00:51.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:51.749964 systemd[1]: Finished dracut-initqueue.service.
Aug 13 01:00:51.750829 systemd[1]: Reached target remote-fs-pre.target.
Aug 13 01:00:51.751927 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 01:00:51.753792 systemd[1]: Reached target remote-fs.target.
Aug 13 01:00:51.757041 systemd[1]: Starting dracut-pre-mount.service...
Aug 13 01:00:51.768277 systemd[1]: Finished dracut-pre-mount.service.
Aug 13 01:00:51.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.167857 ignition[1102]: Ignition 2.14.0
Aug 13 01:00:52.167869 ignition[1102]: Stage: fetch-offline
Aug 13 01:00:52.167983 ignition[1102]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:52.168023 ignition[1102]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:52.181228 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:52.181623 ignition[1102]: Ignition finished successfully
Aug 13 01:00:52.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.183688 systemd[1]: Finished ignition-fetch-offline.service.
Aug 13 01:00:52.185381 systemd[1]: Starting ignition-fetch.service...
Aug 13 01:00:52.193682 ignition[1130]: Ignition 2.14.0
Aug 13 01:00:52.193696 ignition[1130]: Stage: fetch
Aug 13 01:00:52.193855 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:52.193895 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:52.200098 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:52.200948 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:52.233378 ignition[1130]: INFO : PUT result: OK
Aug 13 01:00:52.238762 ignition[1130]: DEBUG : parsed url from cmdline: ""
Aug 13 01:00:52.238762 ignition[1130]: INFO : no config URL provided
Aug 13 01:00:52.238762 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:00:52.238762 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign"
Aug 13 01:00:52.241292 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:52.241292 ignition[1130]: INFO : PUT result: OK
Aug 13 01:00:52.241292 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 13 01:00:52.241292 ignition[1130]: INFO : GET result: OK
Aug 13 01:00:52.241292 ignition[1130]: DEBUG : parsing config with SHA512: e8ccd8c46fc727dd4dd7d784f5583fca2f6dbe7d7f6863800935348ce9cb6ca1c2d9d3311e6f7dc747de0758d8d0ab09125cc4f29bdcda83dda1e4e039c42d43
Aug 13 01:00:52.250462 unknown[1130]: fetched base config from "system"
Aug 13 01:00:52.250478 unknown[1130]: fetched base config from "system"
Aug 13 01:00:52.250485 unknown[1130]: fetched user config from "aws"
Aug 13 01:00:52.251621 ignition[1130]: fetch: fetch complete
Aug 13 01:00:52.251628 ignition[1130]: fetch: fetch passed
Aug 13 01:00:52.251684 ignition[1130]: Ignition finished successfully
Aug 13 01:00:52.255120 systemd[1]: Finished ignition-fetch.service.
Aug 13 01:00:52.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.257261 systemd[1]: Starting ignition-kargs.service...
Aug 13 01:00:52.267919 ignition[1136]: Ignition 2.14.0
Aug 13 01:00:52.267932 ignition[1136]: Stage: kargs
Aug 13 01:00:52.268123 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:52.268162 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:52.275974 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:52.276895 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:52.277653 ignition[1136]: INFO : PUT result: OK
Aug 13 01:00:52.279927 ignition[1136]: kargs: kargs passed
Aug 13 01:00:52.279989 ignition[1136]: Ignition finished successfully
Aug 13 01:00:52.282027 systemd[1]: Finished ignition-kargs.service.
Aug 13 01:00:52.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.283718 systemd[1]: Starting ignition-disks.service...
Aug 13 01:00:52.293130 ignition[1142]: Ignition 2.14.0
Aug 13 01:00:52.293144 ignition[1142]: Stage: disks
Aug 13 01:00:52.293345 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:52.293379 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:52.300967 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:52.301821 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:52.302568 ignition[1142]: INFO : PUT result: OK
Aug 13 01:00:52.304457 ignition[1142]: disks: disks passed
Aug 13 01:00:52.304524 ignition[1142]: Ignition finished successfully
Aug 13 01:00:52.306550 systemd[1]: Finished ignition-disks.service.
Aug 13 01:00:52.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.307432 systemd[1]: Reached target initrd-root-device.target.
Aug 13 01:00:52.308342 systemd[1]: Reached target local-fs-pre.target.
Aug 13 01:00:52.309424 systemd[1]: Reached target local-fs.target.
Aug 13 01:00:52.310350 systemd[1]: Reached target sysinit.target.
Aug 13 01:00:52.311256 systemd[1]: Reached target basic.target.
Aug 13 01:00:52.313541 systemd[1]: Starting systemd-fsck-root.service...
Aug 13 01:00:52.351834 systemd-fsck[1150]: ROOT: clean, 629/553520 files, 56027/553472 blocks
Aug 13 01:00:52.354985 systemd[1]: Finished systemd-fsck-root.service.
Aug 13 01:00:52.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.357488 systemd[1]: Mounting sysroot.mount...
Aug 13 01:00:52.380555 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Aug 13 01:00:52.383378 systemd[1]: Mounted sysroot.mount.
Aug 13 01:00:52.387172 systemd[1]: Reached target initrd-root-fs.target.
Aug 13 01:00:52.395410 systemd[1]: Mounting sysroot-usr.mount...
Aug 13 01:00:52.397365 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Aug 13 01:00:52.398188 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:00:52.398217 systemd[1]: Reached target ignition-diskful.target.
Aug 13 01:00:52.399978 systemd[1]: Mounted sysroot-usr.mount.
Aug 13 01:00:52.401802 systemd[1]: Starting initrd-setup-root.service...
Aug 13 01:00:52.407723 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:00:52.452104 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:00:52.457224 initrd-setup-root[1187]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:00:52.461821 initrd-setup-root[1195]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:00:52.520678 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 01:00:52.541567 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1203)
Aug 13 01:00:52.549150 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:00:52.549230 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 01:00:52.549250 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Aug 13 01:00:52.557566 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 01:00:52.566383 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 01:00:52.579435 systemd[1]: Finished initrd-setup-root.service.
Aug 13 01:00:52.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.581051 systemd[1]: Starting ignition-mount.service...
Aug 13 01:00:52.584500 systemd[1]: Starting sysroot-boot.service...
Aug 13 01:00:52.590411 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Aug 13 01:00:52.590752 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Aug 13 01:00:52.607123 ignition[1232]: INFO : Ignition 2.14.0
Aug 13 01:00:52.608465 ignition[1232]: INFO : Stage: mount
Aug 13 01:00:52.609774 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:52.611753 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:52.623848 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:52.625036 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:52.628024 ignition[1232]: INFO : PUT result: OK
Aug 13 01:00:52.633077 ignition[1232]: INFO : mount: mount passed
Aug 13 01:00:52.633780 systemd[1]: Finished sysroot-boot.service.
Aug 13 01:00:52.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.635713 ignition[1232]: INFO : Ignition finished successfully
Aug 13 01:00:52.635743 systemd[1]: Finished ignition-mount.service.
Aug 13 01:00:52.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:52.638370 systemd[1]: Starting ignition-files.service...
Aug 13 01:00:52.647227 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Aug 13 01:00:52.666555 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1242)
Aug 13 01:00:52.666610 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:00:52.669932 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 01:00:52.669996 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Aug 13 01:00:52.718578 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 01:00:52.721945 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Aug 13 01:00:52.733864 ignition[1261]: INFO : Ignition 2.14.0
Aug 13 01:00:52.733864 ignition[1261]: INFO : Stage: files
Aug 13 01:00:52.736057 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:52.736057 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:52.744028 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:52.745091 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:52.745943 ignition[1261]: INFO : PUT result: OK
Aug 13 01:00:52.748921 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:00:52.754755 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:00:52.754755 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:00:52.765678 systemd-networkd[1106]: eth0: Gained IPv6LL
Aug 13 01:00:52.767348 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:00:52.769005 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:00:52.770461 unknown[1261]: wrote ssh authorized keys file for user: core
Aug 13 01:00:52.771641 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:00:52.773318 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 01:00:52.773318 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 01:00:52.773318 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:00:52.773318 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:00:52.852295 ignition[1261]: INFO : GET result: OK
Aug 13 01:00:53.553829 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:00:53.555208 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:00:53.555208 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:00:53.555208 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:00:53.555208 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:00:53.555208 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Aug 13 01:00:53.555208 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:00:53.564672 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3915788091"
Aug 13 01:00:53.564672 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3915788091": device or resource busy
Aug 13 01:00:53.564672 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3915788091", trying btrfs: device or resource busy
Aug 13 01:00:53.564672 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3915788091"
Aug 13 01:00:53.572188 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3915788091"
Aug 13 01:00:53.572188 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem3915788091"
Aug 13 01:00:53.574783 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem3915788091"
Aug 13 01:00:53.574783 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Aug 13 01:00:53.574783 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:00:53.574783 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:00:53.574783 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:00:53.574783 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:00:53.573225 systemd[1]: mnt-oem3915788091.mount: Deactivated successfully.
Aug 13 01:00:53.584147 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:00:53.584147 ignition[1261]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 01:00:53.638597 ignition[1261]: INFO : GET result: OK
Aug 13 01:00:53.754787 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:00:53.757228 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:00:53.757228 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:00:53.757228 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:00:53.757228 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:00:53.757228 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Aug 13 01:00:53.757228 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:00:53.772684 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083300756"
Aug 13 01:00:53.772684 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083300756": device or resource busy
Aug 13 01:00:53.772684 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2083300756", trying btrfs: device or resource busy
Aug 13 01:00:53.772684 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083300756"
Aug 13 01:00:53.783221 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2083300756"
Aug 13 01:00:53.783221 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem2083300756"
Aug 13 01:00:53.783221 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem2083300756"
Aug 13 01:00:53.783221 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Aug 13 01:00:53.783221 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:00:53.783221 ignition[1261]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:00:53.780120 systemd[1]: mnt-oem2083300756.mount: Deactivated successfully.
Aug 13 01:00:54.123360 ignition[1261]: INFO : GET result: OK
Aug 13 01:00:54.436191 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:00:54.436191 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Aug 13 01:00:54.441639 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:00:54.445921 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223378646"
Aug 13 01:00:54.448143 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223378646": device or resource busy
Aug 13 01:00:54.448143 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3223378646", trying btrfs: device or resource busy
Aug 13 01:00:54.448143 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223378646"
Aug 13 01:00:54.458482 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3223378646"
Aug 13 01:00:54.458482 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem3223378646"
Aug 13 01:00:54.458482 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem3223378646"
Aug 13 01:00:54.458482 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Aug 13 01:00:54.458482 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Aug 13 01:00:54.458482 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Aug 13 01:00:54.473116 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1749437034"
Aug 13 01:00:54.475550 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1749437034": device or resource busy
Aug 13 01:00:54.475550 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1749437034", trying btrfs: device or resource busy
Aug 13 01:00:54.475550 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1749437034"
Aug 13 01:00:54.475550 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1749437034"
Aug 13 01:00:54.475550 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem1749437034"
Aug 13 01:00:54.475550 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem1749437034"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(14): [started] processing unit "nvidia.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(14): [finished] processing unit "nvidia.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(15): [started] processing unit "containerd.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(15): [finished] processing unit "containerd.service"
Aug 13 01:00:54.475550 ignition[1261]: INFO : files: op(17): [started] processing unit "prepare-helm.service"
Aug 13 01:00:54.549314 kernel: kauditd_printk_skb: 25 callbacks suppressed
Aug 13 01:00:54.549349 kernel: audit: type=1130 audit(1755046854.489:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.549370 kernel: audit: type=1130 audit(1755046854.511:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.549389 kernel: audit: type=1131 audit(1755046854.511:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.549409 kernel: audit: type=1130 audit(1755046854.524:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(17): [finished] processing unit "prepare-helm.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(1b): [started] setting preset to enabled for "amazon-ssm-agent.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(1b): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:00:54.549655 ignition[1261]: INFO : files: files passed
Aug 13 01:00:54.549655 ignition[1261]: INFO : Ignition finished successfully
Aug 13 01:00:54.605977 kernel: audit: type=1130 audit(1755046854.555:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.606025 kernel: audit: type=1131 audit(1755046854.555:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.606047 kernel: audit: type=1130 audit(1755046854.589:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.488467 systemd[1]: Finished ignition-files.service.
Aug 13 01:00:54.497379 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Aug 13 01:00:54.609137 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:00:54.503754 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Aug 13 01:00:54.504979 systemd[1]: Starting ignition-quench.service...
Aug 13 01:00:54.625275 kernel: audit: type=1130 audit(1755046854.612:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.625323 kernel: audit: type=1131 audit(1755046854.612:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.510133 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:00:54.510273 systemd[1]: Finished ignition-quench.service.
Aug 13 01:00:54.513017 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Aug 13 01:00:54.525811 systemd[1]: Reached target ignition-complete.target.
Aug 13 01:00:54.635980 kernel: audit: type=1131 audit(1755046854.628:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.533231 systemd[1]: Starting initrd-parse-etc.service...
Aug 13 01:00:54.555426 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:00:54.555618 systemd[1]: Finished initrd-parse-etc.service.
Aug 13 01:00:54.557190 systemd[1]: Reached target initrd-fs.target.
Aug 13 01:00:54.569062 systemd[1]: Reached target initrd.target.
Aug 13 01:00:54.571478 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Aug 13 01:00:54.572886 systemd[1]: Starting dracut-pre-pivot.service...
Aug 13 01:00:54.584194 systemd[1]: mnt-oem3223378646.mount: Deactivated successfully.
Aug 13 01:00:54.589769 systemd[1]: Finished dracut-pre-pivot.service.
Aug 13 01:00:54.591799 systemd[1]: Starting initrd-cleanup.service...
Aug 13 01:00:54.612094 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:00:54.612216 systemd[1]: Finished initrd-cleanup.service.
Aug 13 01:00:54.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.615764 systemd[1]: Stopped target nss-lookup.target.
Aug 13 01:00:54.626129 systemd[1]: Stopped target remote-cryptsetup.target.
Aug 13 01:00:54.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.627650 systemd[1]: Stopped target timers.target.
Aug 13 01:00:54.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.629077 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:00:54.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.629157 systemd[1]: Stopped dracut-pre-pivot.service.
Aug 13 01:00:54.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.630551 systemd[1]: Stopped target initrd.target.
Aug 13 01:00:54.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.636772 systemd[1]: Stopped target basic.target.
Aug 13 01:00:54.638143 systemd[1]: Stopped target ignition-complete.target.
Aug 13 01:00:54.639408 systemd[1]: Stopped target ignition-diskful.target.
Aug 13 01:00:54.640676 systemd[1]: Stopped target initrd-root-device.target.
Aug 13 01:00:54.641963 systemd[1]: Stopped target remote-fs.target.
Aug 13 01:00:54.643160 systemd[1]: Stopped target remote-fs-pre.target.
Aug 13 01:00:54.680680 ignition[1299]: INFO : Ignition 2.14.0
Aug 13 01:00:54.680680 ignition[1299]: INFO : Stage: umount
Aug 13 01:00:54.680680 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Aug 13 01:00:54.680680 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Aug 13 01:00:54.645469 systemd[1]: Stopped target sysinit.target.
Aug 13 01:00:54.646684 systemd[1]: Stopped target local-fs.target.
Aug 13 01:00:54.691677 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 01:00:54.691677 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 01:00:54.647874 systemd[1]: Stopped target local-fs-pre.target.
Aug 13 01:00:54.649130 systemd[1]: Stopped target swap.target.
Aug 13 01:00:54.650299 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:00:54.696482 ignition[1299]: INFO : PUT result: OK
Aug 13 01:00:54.650383 systemd[1]: Stopped dracut-pre-mount.service.
Aug 13 01:00:54.651571 systemd[1]: Stopped target cryptsetup.target.
Aug 13 01:00:54.652675 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:00:54.652798 systemd[1]: Stopped dracut-initqueue.service.
Aug 13 01:00:54.654007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:00:54.701282 ignition[1299]: INFO : umount: umount passed
Aug 13 01:00:54.701282 ignition[1299]: INFO : Ignition finished successfully
Aug 13 01:00:54.654073 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Aug 13 01:00:54.655167 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:00:54.655227 systemd[1]: Stopped ignition-files.service.
Aug 13 01:00:54.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.657415 systemd[1]: Stopping ignition-mount.service...
Aug 13 01:00:54.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.659586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:00:54.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.659662 systemd[1]: Stopped kmod-static-nodes.service.
Aug 13 01:00:54.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.661767 systemd[1]: Stopping sysroot-boot.service...
Aug 13 01:00:54.662775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:00:54.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.662857 systemd[1]: Stopped systemd-udev-trigger.service.
Aug 13 01:00:54.663915 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:00:54.663993 systemd[1]: Stopped dracut-pre-trigger.service.
Aug 13 01:00:54.702495 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:00:54.704330 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:00:54.704453 systemd[1]: Stopped ignition-mount.service.
Aug 13 01:00:54.706108 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:00:54.706179 systemd[1]: Stopped ignition-disks.service.
Aug 13 01:00:54.707150 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:00:54.707211 systemd[1]: Stopped ignition-kargs.service.
Aug 13 01:00:54.708202 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:00:54.708259 systemd[1]: Stopped ignition-fetch.service.
Aug 13 01:00:54.709301 systemd[1]: Stopped target network.target.
Aug 13 01:00:54.710316 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:00:54.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.710379 systemd[1]: Stopped ignition-fetch-offline.service.
Aug 13 01:00:54.711456 systemd[1]: Stopped target paths.target.
Aug 13 01:00:54.712524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:00:54.716598 systemd[1]: Stopped systemd-ask-password-console.path.
Aug 13 01:00:54.717610 systemd[1]: Stopped target slices.target.
Aug 13 01:00:54.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.718641 systemd[1]: Stopped target sockets.target.
Aug 13 01:00:54.719738 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:00:54.719784 systemd[1]: Closed iscsid.socket.
Aug 13 01:00:54.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.720819 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:00:54.720857 systemd[1]: Closed iscsiuio.socket.
Aug 13 01:00:54.734000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 01:00:54.721915 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:00:54.721984 systemd[1]: Stopped ignition-setup.service.
Aug 13 01:00:54.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.723339 systemd[1]: Stopping systemd-networkd.service...
Aug 13 01:00:54.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.724368 systemd[1]: Stopping systemd-resolved.service...
Aug 13 01:00:54.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.726616 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Aug 13 01:00:54.741000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 01:00:54.728136 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:00:54.728267 systemd[1]: Stopped systemd-networkd.service.
Aug 13 01:00:54.731455 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:00:54.731765 systemd[1]: Stopped systemd-resolved.service.
Aug 13 01:00:54.733169 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:00:54.733217 systemd[1]: Closed systemd-networkd.socket.
Aug 13 01:00:54.735172 systemd[1]: Stopping network-cleanup.service...
Aug 13 01:00:54.737431 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:00:54.737510 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 01:00:54.738665 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:00:54.738731 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 01:00:54.739937 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:00:54.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.740016 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 01:00:54.747017 systemd[1]: Stopping systemd-udevd.service...
Aug 13 01:00:54.749512 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:00:54.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.754496 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:00:54.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.754750 systemd[1]: Stopped systemd-udevd.service.
Aug 13 01:00:54.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.756944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:00:54.757001 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 01:00:54.757823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:00:54.757871 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 01:00:54.760485 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:00:54.760563 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 01:00:54.761336 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:00:54.761398 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 01:00:54.762883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:00:54.762930 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 01:00:54.764909 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 01:00:54.772627 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:00:54.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:00:54.772878 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 01:00:54.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:54.774033 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:00:54.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:54.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:54.774167 systemd[1]: Stopped network-cleanup.service. Aug 13 01:00:54.775588 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:00:54.775700 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 01:00:54.818577 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:00:54.818680 systemd[1]: Stopped sysroot-boot.service. Aug 13 01:00:54.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:54.819948 systemd[1]: Reached target initrd-switch-root.target. Aug 13 01:00:54.821026 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:00:54.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:00:54.821089 systemd[1]: Stopped initrd-setup-root.service. Aug 13 01:00:54.822903 systemd[1]: Starting initrd-switch-root.service... Aug 13 01:00:54.855839 systemd[1]: Switching root. 
Aug 13 01:00:54.856000 audit: BPF prog-id=5 op=UNLOAD Aug 13 01:00:54.856000 audit: BPF prog-id=4 op=UNLOAD Aug 13 01:00:54.856000 audit: BPF prog-id=3 op=UNLOAD Aug 13 01:00:54.860000 audit: BPF prog-id=8 op=UNLOAD Aug 13 01:00:54.860000 audit: BPF prog-id=7 op=UNLOAD Aug 13 01:00:54.880758 iscsid[1111]: iscsid shutting down. Aug 13 01:00:54.881457 systemd-journald[185]: Journal stopped Aug 13 01:01:00.042043 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Aug 13 01:01:00.042112 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 01:01:00.042131 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 01:01:00.042145 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 01:01:00.042156 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:01:00.042167 kernel: SELinux: policy capability open_perms=1 Aug 13 01:01:00.042179 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:01:00.042190 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:01:00.042206 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:01:00.042218 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:01:00.042234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:01:00.042244 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:01:00.042260 systemd[1]: Successfully loaded SELinux policy in 78.580ms. Aug 13 01:01:00.042281 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.013ms. Aug 13 01:01:00.042294 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:01:00.042307 systemd[1]: Detected virtualization amazon. 
Aug 13 01:01:00.042322 systemd[1]: Detected architecture x86-64. Aug 13 01:01:00.042334 systemd[1]: Detected first boot. Aug 13 01:01:00.042346 systemd[1]: Initializing machine ID from VM UUID. Aug 13 01:01:00.042359 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 01:01:00.042370 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:01:00.042382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:01:00.042399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:01:00.042413 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:01:00.042430 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:01:00.042442 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Aug 13 01:01:00.042454 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 01:01:00.042467 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 01:01:00.042479 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Aug 13 01:01:00.042491 systemd[1]: Created slice system-getty.slice. Aug 13 01:01:00.042504 systemd[1]: Created slice system-modprobe.slice. Aug 13 01:01:00.042515 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 01:01:00.042545 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 01:01:00.042558 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 01:01:00.042570 systemd[1]: Created slice user.slice. Aug 13 01:01:00.042582 systemd[1]: Started systemd-ask-password-console.path. 
Aug 13 01:01:00.042594 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 01:01:00.042606 systemd[1]: Set up automount boot.automount. Aug 13 01:01:00.042618 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 01:01:00.042631 systemd[1]: Reached target integritysetup.target. Aug 13 01:01:00.042643 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:01:00.042658 systemd[1]: Reached target remote-fs.target. Aug 13 01:01:00.042670 systemd[1]: Reached target slices.target. Aug 13 01:01:00.042681 systemd[1]: Reached target swap.target. Aug 13 01:01:00.042692 systemd[1]: Reached target torcx.target. Aug 13 01:01:00.042707 systemd[1]: Reached target veritysetup.target. Aug 13 01:01:00.042721 systemd[1]: Listening on systemd-coredump.socket. Aug 13 01:01:00.042740 systemd[1]: Listening on systemd-initctl.socket. Aug 13 01:01:00.042755 kernel: kauditd_printk_skb: 46 callbacks suppressed Aug 13 01:01:00.042767 kernel: audit: type=1400 audit(1755046859.882:85): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:01:00.042779 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:01:00.042792 kernel: audit: type=1335 audit(1755046859.882:86): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 01:01:00.042806 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:01:00.042817 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:01:00.042829 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:01:00.042841 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:01:00.042853 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:01:00.042866 systemd[1]: Listening on systemd-userdbd.socket. 
Aug 13 01:01:00.042878 systemd[1]: Mounting dev-hugepages.mount... Aug 13 01:01:00.042890 systemd[1]: Mounting dev-mqueue.mount... Aug 13 01:01:00.042902 systemd[1]: Mounting media.mount... Aug 13 01:01:00.042914 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:01:00.042929 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 01:01:00.042940 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 01:01:00.042953 systemd[1]: Mounting tmp.mount... Aug 13 01:01:00.042965 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 01:01:00.042978 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:01:00.042990 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:01:00.043001 systemd[1]: Starting modprobe@configfs.service... Aug 13 01:01:00.043014 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:01:00.043028 systemd[1]: Starting modprobe@drm.service... Aug 13 01:01:00.043040 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:01:00.043052 systemd[1]: Starting modprobe@fuse.service... Aug 13 01:01:00.043064 systemd[1]: Starting modprobe@loop.service... Aug 13 01:01:00.043077 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:01:00.043089 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 01:01:00.043100 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 01:01:00.043115 systemd[1]: Starting systemd-journald.service... Aug 13 01:01:00.043127 kernel: fuse: init (API version 7.34) Aug 13 01:01:00.043140 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:01:00.043152 systemd[1]: Starting systemd-network-generator.service... Aug 13 01:01:00.043165 systemd[1]: Starting systemd-remount-fs.service... 
Aug 13 01:01:00.043177 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:01:00.043189 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:01:00.043201 systemd[1]: Mounted dev-hugepages.mount. Aug 13 01:01:00.043212 systemd[1]: Mounted dev-mqueue.mount. Aug 13 01:01:00.043224 systemd[1]: Mounted media.mount. Aug 13 01:01:00.043237 kernel: loop: module loaded Aug 13 01:01:00.043251 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 01:01:00.043263 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 01:01:00.043275 kernel: audit: type=1305 audit(1755046860.039:87): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 01:01:00.043287 kernel: audit: type=1300 audit(1755046860.039:87): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd9f9ecca0 a2=4000 a3=7ffd9f9ecd3c items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:01:00.043298 kernel: audit: type=1327 audit(1755046860.039:87): proctitle="/usr/lib/systemd/systemd-journald" Aug 13 01:01:00.043314 systemd-journald[1449]: Journal started Aug 13 01:01:00.043364 systemd-journald[1449]: Runtime Journal (/run/log/journal/ec2ea6e0d803d95e6bdf93e2961734c7) is 4.8M, max 38.3M, 33.5M free. 
Aug 13 01:00:59.882000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:00:59.882000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 01:01:00.039000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 01:01:00.039000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd9f9ecca0 a2=4000 a3=7ffd9f9ecd3c items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:01:00.039000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 01:01:00.076668 systemd[1]: Started systemd-journald.service. Aug 13 01:01:00.076762 kernel: audit: type=1130 audit(1755046860.063:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.076790 kernel: audit: type=1130 audit(1755046860.072:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:01:00.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.071163 systemd[1]: Mounted tmp.mount. Aug 13 01:01:00.073306 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:01:00.077593 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 01:01:00.080318 kernel: audit: type=1130 audit(1755046860.078:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.079936 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:01:00.080109 systemd[1]: Finished modprobe@configfs.service. Aug 13 01:01:00.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.085652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:01:00.085821 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:01:00.090360 kernel: audit: type=1130 audit(1755046860.083:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:01:00.090430 kernel: audit: type=1131 audit(1755046860.083:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.096947 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:01:00.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.097282 systemd[1]: Finished modprobe@drm.service. Aug 13 01:01:00.098302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:01:00.098499 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:01:00.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 01:01:00.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.099458 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:01:00.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.099671 systemd[1]: Finished modprobe@fuse.service. Aug 13 01:01:00.100962 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:01:00.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:01:00.102678 systemd[1]: Finished modprobe@loop.service. Aug 13 01:01:00.103734 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:01:00.104704 systemd[1]: Finished systemd-network-generator.service. Aug 13 01:01:00.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.106118 systemd[1]: Finished systemd-remount-fs.service. Aug 13 01:01:00.107261 systemd[1]: Reached target network-pre.target. Aug 13 01:01:00.109428 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 01:01:00.111429 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 01:01:00.112339 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:01:00.118249 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 01:01:00.120242 systemd[1]: Starting systemd-journal-flush.service... Aug 13 01:01:00.123983 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:01:00.129442 systemd[1]: Starting systemd-random-seed.service... Aug 13 01:01:00.130685 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:01:00.132251 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:01:00.134126 systemd[1]: Starting systemd-sysusers.service... Aug 13 01:01:00.136240 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 01:01:00.136963 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 01:01:00.139682 systemd-journald[1449]: Time spent on flushing to /var/log/journal/ec2ea6e0d803d95e6bdf93e2961734c7 is 37.699ms for 1162 entries. Aug 13 01:01:00.139682 systemd-journald[1449]: System Journal (/var/log/journal/ec2ea6e0d803d95e6bdf93e2961734c7) is 8.0M, max 195.6M, 187.6M free. 
Aug 13 01:01:00.189040 systemd-journald[1449]: Received client request to flush runtime journal. Aug 13 01:01:00.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.145663 systemd[1]: Finished systemd-random-seed.service. Aug 13 01:01:00.146333 systemd[1]: Reached target first-boot-complete.target. Aug 13 01:01:00.175003 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:01:00.176900 systemd[1]: Starting systemd-udev-settle.service... Aug 13 01:01:00.189974 systemd[1]: Finished systemd-journal-flush.service. Aug 13 01:01:00.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.195142 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:01:00.196444 udevadm[1500]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 01:01:00.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.362278 systemd[1]: Finished systemd-sysusers.service. 
Aug 13 01:01:00.369281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:01:00.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:00.617008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:01:01.929826 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 01:01:01.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:01.932436 systemd[1]: Starting systemd-udevd.service... Aug 13 01:01:02.001286 systemd-udevd[1509]: Using default interface naming scheme 'v252'. Aug 13 01:01:02.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:02.193834 systemd[1]: Started systemd-udevd.service. Aug 13 01:01:02.197473 systemd[1]: Starting systemd-networkd.service... Aug 13 01:01:02.246855 systemd[1]: Starting systemd-userdbd.service... Aug 13 01:01:02.417067 (udev-worker)[1518]: Network interface NamePolicy= disabled on kernel command line. Aug 13 01:01:02.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:02.418702 systemd[1]: Started systemd-userdbd.service. Aug 13 01:01:02.435990 systemd[1]: Found device dev-ttyS0.device. 
Aug 13 01:01:02.635559 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 01:01:02.719103 systemd-networkd[1517]: lo: Link UP Aug 13 01:01:02.719125 systemd-networkd[1517]: lo: Gained carrier Aug 13 01:01:02.719807 systemd-networkd[1517]: Enumeration completed Aug 13 01:01:02.719989 systemd[1]: Started systemd-networkd.service. Aug 13 01:01:02.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:02.728046 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 01:01:02.732596 systemd-networkd[1517]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:01:02.756759 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:01:02.747124 systemd-networkd[1517]: eth0: Link UP Aug 13 01:01:02.747300 systemd-networkd[1517]: eth0: Gained carrier Aug 13 01:01:02.770736 systemd-networkd[1517]: eth0: DHCPv4 address 172.31.20.232/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 01:01:02.792571 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:01:02.780000 audit[1525]: AVC avc: denied { confidentiality } for pid=1525 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 01:01:02.795595 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Aug 13 01:01:02.780000 audit[1525]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55587e41d2f0 a1=338ac a2=7fe74e7b3bc5 a3=5 items=110 ppid=1509 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:01:02.780000 audit: CWD cwd="/" Aug 13 01:01:02.780000 audit: PATH item=0 
name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=1 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=2 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=3 name=(null) inode=14711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=4 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=5 name=(null) inode=14712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=6 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=7 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=8 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=9 name=(null) inode=14714 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=10 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=11 name=(null) inode=14715 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=12 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=13 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=14 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=15 name=(null) inode=14717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=16 name=(null) inode=14713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=17 name=(null) inode=14718 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=18 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=19 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=20 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=21 name=(null) inode=14720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=22 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=23 name=(null) inode=14721 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=24 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=25 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=26 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=27 name=(null) inode=14723 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=28 name=(null) inode=14719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=29 name=(null) inode=14724 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=30 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=31 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=32 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=33 name=(null) inode=14726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=34 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=35 name=(null) inode=14727 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=36 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=37 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=38 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=39 name=(null) inode=14729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=40 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=41 name=(null) inode=14730 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=42 name=(null) inode=14710 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=43 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=44 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=45 name=(null) inode=14732 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
01:01:02.780000 audit: PATH item=46 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=47 name=(null) inode=14733 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=48 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=49 name=(null) inode=14734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=50 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=51 name=(null) inode=14735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=52 name=(null) inode=14731 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=53 name=(null) inode=14736 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=54 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=55 
name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=56 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=57 name=(null) inode=14738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=58 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=59 name=(null) inode=14739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=60 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=61 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=62 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=63 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=64 name=(null) inode=14740 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=65 name=(null) inode=14742 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=66 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=67 name=(null) inode=14743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=68 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=69 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=70 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=71 name=(null) inode=14745 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=72 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=73 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=74 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=75 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=76 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=77 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=78 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=79 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=80 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=81 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=82 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=83 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=84 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=85 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=86 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=87 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=88 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=89 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=90 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=91 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=92 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=93 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=94 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=95 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=96 name=(null) inode=14737 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=97 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=98 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=99 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=100 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 
01:01:02.780000 audit: PATH item=101 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=102 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=103 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=104 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=105 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=106 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=107 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PATH item=109 name=(null) inode=14764 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:01:02.780000 audit: PROCTITLE 
proctitle="(udev-worker)" Aug 13 01:01:02.823561 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 13 01:01:02.852075 kernel: ACPI: button: Sleep Button [SLPF] Aug 13 01:01:02.887664 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 01:01:02.895551 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:01:03.051672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:01:03.052755 systemd[1]: Finished systemd-udev-settle.service. Aug 13 01:01:03.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.054741 systemd[1]: Starting lvm2-activation-early.service... Aug 13 01:01:03.119884 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:01:03.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.170377 systemd[1]: Finished lvm2-activation-early.service. Aug 13 01:01:03.171491 systemd[1]: Reached target cryptsetup.target. Aug 13 01:01:03.177885 systemd[1]: Starting lvm2-activation.service... Aug 13 01:01:03.185338 lvm[1626]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:01:03.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.213498 systemd[1]: Finished lvm2-activation.service. Aug 13 01:01:03.214615 systemd[1]: Reached target local-fs-pre.target. 
Aug 13 01:01:03.215598 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:01:03.215631 systemd[1]: Reached target local-fs.target. Aug 13 01:01:03.216433 systemd[1]: Reached target machines.target. Aug 13 01:01:03.219760 systemd[1]: Starting ldconfig.service... Aug 13 01:01:03.222163 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:01:03.222251 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:01:03.223993 systemd[1]: Starting systemd-boot-update.service... Aug 13 01:01:03.226460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 01:01:03.229254 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 01:01:03.232056 systemd[1]: Starting systemd-sysext.service... Aug 13 01:01:03.240232 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1629 (bootctl) Aug 13 01:01:03.242258 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 01:01:03.259900 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 01:01:03.267381 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 01:01:03.267735 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 01:01:03.269838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 01:01:03.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:01:03.291569 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 01:01:03.457554 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:01:03.492568 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 01:01:03.507299 systemd-fsck[1641]: fsck.fat 4.2 (2021-01-31) Aug 13 01:01:03.507299 systemd-fsck[1641]: /dev/nvme0n1p1: 789 files, 119324/258078 clusters Aug 13 01:01:03.511219 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 01:01:03.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.514305 systemd[1]: Mounting boot.mount... Aug 13 01:01:03.528974 (sd-sysext)[1644]: Using extensions 'kubernetes'. Aug 13 01:01:03.529550 (sd-sysext)[1644]: Merged extensions into '/usr'. Aug 13 01:01:03.565036 systemd[1]: Mounted boot.mount. Aug 13 01:01:03.569262 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:01:03.572215 systemd[1]: Mounting usr-share-oem.mount... Aug 13 01:01:03.573486 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:01:03.575695 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:01:03.578456 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:01:03.582951 systemd[1]: Starting modprobe@loop.service... Aug 13 01:01:03.586469 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:01:03.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:01:03.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.586725 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:01:03.586917 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:01:03.588412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:01:03.588696 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:01:03.590477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:01:03.590722 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:01:03.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.598998 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:01:03.602087 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 01:01:03.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.609049 systemd[1]: Mounted usr-share-oem.mount. Aug 13 01:01:03.614218 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 13 01:01:03.614777 systemd[1]: Finished modprobe@loop.service. Aug 13 01:01:03.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.624059 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:01:03.624226 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:01:03.627191 systemd[1]: Finished systemd-sysext.service. Aug 13 01:01:03.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:03.630405 systemd[1]: Starting ensure-sysext.service... Aug 13 01:01:03.633324 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 01:01:03.645059 systemd[1]: Reloading. Aug 13 01:01:03.666819 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 01:01:03.668564 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:01:03.671520 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 01:01:03.734749 /usr/lib/systemd/system-generators/torcx-generator[1699]: time="2025-08-13T01:01:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:01:03.736659 /usr/lib/systemd/system-generators/torcx-generator[1699]: time="2025-08-13T01:01:03Z" level=info msg="torcx already run" Aug 13 01:01:03.871467 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:01:03.871494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:01:03.892798 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:01:04.016841 systemd[1]: Finished systemd-boot-update.service. Aug 13 01:01:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:04.036933 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:01:04.039507 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:01:04.042302 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:01:04.045141 systemd[1]: Starting modprobe@loop.service... Aug 13 01:01:04.046156 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 01:01:04.046659 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:01:04.048273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:01:04.048778 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:01:04.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:04.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:04.053767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:01:04.054013 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:01:04.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:04.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:04.058944 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 01:01:04.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:01:04.060933 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:01:04.061299 systemd[1]: Finished modprobe@loop.service. 
Aug 13 01:01:04.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.065973 systemd[1]: Starting audit-rules.service...
Aug 13 01:01:04.074607 systemd[1]: Starting clean-ca-certificates.service...
Aug 13 01:01:04.083347 systemd[1]: Starting systemd-journal-catalog-update.service...
Aug 13 01:01:04.084087 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:01:04.084247 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.086693 systemd[1]: Starting systemd-resolved.service...
Aug 13 01:01:04.092482 systemd[1]: Starting systemd-timesyncd.service...
Aug 13 01:01:04.097937 systemd[1]: Starting systemd-update-utmp.service...
Aug 13 01:01:04.105832 systemd[1]: Finished clean-ca-certificates.service.
Aug 13 01:01:04.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.124000 audit[1776]: SYSTEM_BOOT pid=1776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.114622 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.116810 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:01:04.121119 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:01:04.124852 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:01:04.127622 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.127854 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:01:04.128033 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:01:04.129204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:01:04.129449 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:01:04.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.140642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:01:04.140872 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:01:04.142461 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:01:04.142693 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:01:04.143782 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:01:04.144008 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.147003 systemd[1]: Finished systemd-update-utmp.service.
Aug 13 01:01:04.165297 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.167345 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:01:04.170500 systemd[1]: Starting modprobe@drm.service...
Aug 13 01:01:04.174500 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:01:04.179825 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:01:04.180644 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.180855 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:01:04.181058 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:01:04.182632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:01:04.182886 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:01:04.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.187417 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:01:04.187679 systemd[1]: Finished modprobe@drm.service.
Aug 13 01:01:04.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.194163 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:01:04.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.196767 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:01:04.197016 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:01:04.197742 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:01:04.199126 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:01:04.199374 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:01:04.200133 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:01:04.241585 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 01:01:04.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:01:04.252000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 01:01:04.252000 audit[1807]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbfd1abf0 a2=420 a3=0 items=0 ppid=1766 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:01:04.253938 augenrules[1807]: No rules
Aug 13 01:01:04.252000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 01:01:04.256051 systemd[1]: Finished audit-rules.service.
Aug 13 01:01:04.268765 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:04.268798 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:04.323057 systemd[1]: Started systemd-timesyncd.service.
Aug 13 01:01:04.323756 systemd[1]: Reached target time-set.target.
Aug 13 01:01:04.349823 systemd-resolved[1771]: Positive Trust Anchors:
Aug 13 01:01:04.349840 systemd-resolved[1771]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:01:04.349883 systemd-resolved[1771]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 01:01:04.390881 systemd-resolved[1771]: Defaulting to hostname 'linux'.
Aug 13 01:01:04.392889 systemd[1]: Started systemd-resolved.service.
Aug 13 01:01:04.393351 systemd[1]: Reached target network.target.
Aug 13 01:01:04.393697 systemd[1]: Reached target nss-lookup.target.
Aug 13 01:01:05.345407 systemd-timesyncd[1774]: Contacted time server 144.202.0.197:123 (0.flatcar.pool.ntp.org).
Aug 13 01:01:05.345474 systemd-timesyncd[1774]: Initial clock synchronization to Wed 2025-08-13 01:01:05.345255 UTC.
Aug 13 01:01:05.345539 systemd-resolved[1771]: Clock change detected. Flushing caches.
Aug 13 01:01:05.547431 ldconfig[1628]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:01:05.554068 systemd[1]: Finished ldconfig.service.
Aug 13 01:01:05.556919 systemd[1]: Starting systemd-update-done.service...
Aug 13 01:01:05.566653 systemd[1]: Finished systemd-update-done.service.
Aug 13 01:01:05.567558 systemd[1]: Reached target sysinit.target.
Aug 13 01:01:05.568506 systemd[1]: Started motdgen.path.
Aug 13 01:01:05.569214 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 01:01:05.569999 systemd[1]: Started logrotate.timer.
Aug 13 01:01:05.571373 systemd[1]: Started mdadm.timer.
Aug 13 01:01:05.572051 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 01:01:05.572625 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:01:05.572675 systemd[1]: Reached target paths.target.
Aug 13 01:01:05.573062 systemd[1]: Reached target timers.target.
Aug 13 01:01:05.573794 systemd[1]: Listening on dbus.socket.
Aug 13 01:01:05.576591 systemd[1]: Starting docker.socket...
Aug 13 01:01:05.579397 systemd[1]: Listening on sshd.socket.
Aug 13 01:01:05.580511 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:01:05.581496 systemd[1]: Listening on docker.socket.
Aug 13 01:01:05.582278 systemd[1]: Reached target sockets.target.
Aug 13 01:01:05.582968 systemd[1]: Reached target basic.target.
Aug 13 01:01:05.583791 systemd[1]: System is tainted: cgroupsv1
Aug 13 01:01:05.583935 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:01:05.583980 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:01:05.585351 systemd[1]: Starting containerd.service...
Aug 13 01:01:05.587396 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Aug 13 01:01:05.589836 systemd[1]: Starting dbus.service...
Aug 13 01:01:05.592166 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 01:01:05.594548 systemd[1]: Starting extend-filesystems.service...
Aug 13 01:01:05.599542 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 01:01:05.602812 systemd[1]: Starting motdgen.service...
Aug 13 01:01:05.607176 systemd[1]: Starting prepare-helm.service...
Aug 13 01:01:05.610387 systemd-networkd[1517]: eth0: Gained IPv6LL
Aug 13 01:01:05.610968 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 01:01:05.616258 jq[1823]: false
Aug 13 01:01:05.619441 systemd[1]: Starting sshd-keygen.service...
Aug 13 01:01:05.629032 systemd[1]: Starting systemd-logind.service...
Aug 13 01:01:05.629734 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:01:05.629822 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:01:05.632989 systemd[1]: Starting update-engine.service...
Aug 13 01:01:05.636178 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 01:01:05.639251 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 01:01:05.644096 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:01:05.644590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 01:01:05.646089 systemd[1]: Reached target network-online.target.
Aug 13 01:01:05.650129 systemd[1]: Started amazon-ssm-agent.service.
Aug 13 01:01:05.674570 jq[1838]: true
Aug 13 01:01:05.656846 systemd[1]: Starting kubelet.service...
Aug 13 01:01:05.662565 systemd[1]: Started nvidia.service.
Aug 13 01:01:05.684761 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:01:05.710504 jq[1848]: true
Aug 13 01:01:05.685128 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 01:01:05.736944 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:01:05.737320 systemd[1]: Finished motdgen.service.
Aug 13 01:01:05.762222 tar[1840]: linux-amd64/helm
Aug 13 01:01:05.775175 extend-filesystems[1824]: Found loop1
Aug 13 01:01:05.779304 extend-filesystems[1824]: Found nvme0n1
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p1
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p2
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p3
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found usr
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p4
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p6
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p7
Aug 13 01:01:05.780031 extend-filesystems[1824]: Found nvme0n1p9
Aug 13 01:01:05.780031 extend-filesystems[1824]: Checking size of /dev/nvme0n1p9
Aug 13 01:01:05.821734 env[1846]: time="2025-08-13T01:01:05.821633601Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 01:01:05.972913 env[1846]: time="2025-08-13T01:01:05.972861028Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 01:01:05.973251 env[1846]: time="2025-08-13T01:01:05.973228918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:01:05.976388 env[1846]: time="2025-08-13T01:01:05.976336336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:01:05.976562 env[1846]: time="2025-08-13T01:01:05.976543576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:01:05.976997 env[1846]: time="2025-08-13T01:01:05.976970536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:01:05.977086 env[1846]: time="2025-08-13T01:01:05.977070498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 01:01:05.977161 env[1846]: time="2025-08-13T01:01:05.977144987Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 01:01:05.977256 env[1846]: time="2025-08-13T01:01:05.977240655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 01:01:05.977434 env[1846]: time="2025-08-13T01:01:05.977416435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:01:05.977804 env[1846]: time="2025-08-13T01:01:05.977783500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:01:05.978205 env[1846]: time="2025-08-13T01:01:05.978154368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:01:05.978311 env[1846]: time="2025-08-13T01:01:05.978293290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 01:01:05.978450 env[1846]: time="2025-08-13T01:01:05.978432057Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 01:01:05.978527 env[1846]: time="2025-08-13T01:01:05.978512902Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:01:06.045026 extend-filesystems[1824]: Resized partition /dev/nvme0n1p9
Aug 13 01:01:06.061277 systemd-logind[1835]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 01:01:06.061309 systemd-logind[1835]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 13 01:01:06.061335 systemd-logind[1835]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:01:06.066549 systemd-logind[1835]: New seat seat0.
Aug 13 01:01:06.067051 amazon-ssm-agent[1842]: 2025/08/13 01:01:06 Failed to load instance info from vault. RegistrationKey does not exist.
Aug 13 01:01:06.070590 amazon-ssm-agent[1842]: Initializing new seelog logger
Aug 13 01:01:06.074256 amazon-ssm-agent[1842]: New Seelog Logger Creation Complete
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074480797Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074535399Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074555919Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074668606Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074693458Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074714696Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074734756Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074755941Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074775963Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074796583Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074815519Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074833992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.074985933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 01:01:06.075974 env[1846]: time="2025-08-13T01:01:06.075081857Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075555916Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075597273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075615644Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075669660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075686117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075703740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075720049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075736287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075751721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075765989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075781767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.078414 env[1846]: time="2025-08-13T01:01:06.075802279Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.078940755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.078983442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.079003652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.079023024Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.079045175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.079062349Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.079094564Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 01:01:06.079699 env[1846]: time="2025-08-13T01:01:06.079145820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 01:01:06.080055 bash[1882]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:01:06.080379 env[1846]: time="2025-08-13T01:01:06.079434663Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 01:01:06.080379 env[1846]: time="2025-08-13T01:01:06.079522494Z" level=info msg="Connect containerd service"
Aug 13 01:01:06.080379 env[1846]: time="2025-08-13T01:01:06.079578335Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 01:01:06.086796 env[1846]: time="2025-08-13T01:01:06.081388010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:01:06.086796 env[1846]: time="2025-08-13T01:01:06.081777942Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:01:06.086796 env[1846]: time="2025-08-13T01:01:06.081850831Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:01:06.086796 env[1846]: time="2025-08-13T01:01:06.081923931Z" level=info msg="containerd successfully booted in 0.261280s"
Aug 13 01:01:06.082075 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 01:01:06.082830 systemd[1]: Started containerd.service.
Aug 13 01:01:06.089246 amazon-ssm-agent[1842]: 2025/08/13 01:01:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 01:01:06.089359 amazon-ssm-agent[1842]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 01:01:06.089676 amazon-ssm-agent[1842]: 2025/08/13 01:01:06 processing appconfig overrides
Aug 13 01:01:06.089884 env[1846]: time="2025-08-13T01:01:06.089815585Z" level=info msg="Start subscribing containerd event"
Aug 13 01:01:06.090335 env[1846]: time="2025-08-13T01:01:06.090297933Z" level=info msg="Start recovering state"
Aug 13 01:01:06.090546 env[1846]: time="2025-08-13T01:01:06.090528985Z" level=info msg="Start event monitor"
Aug 13 01:01:06.090679 env[1846]: time="2025-08-13T01:01:06.090659078Z" level=info msg="Start snapshots syncer"
Aug 13 01:01:06.090781 env[1846]: time="2025-08-13T01:01:06.090765464Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:01:06.090892 env[1846]: time="2025-08-13T01:01:06.090873241Z" level=info msg="Start streaming server"
Aug 13 01:01:06.109821 dbus-daemon[1822]: [system] SELinux support is enabled
Aug 13 01:01:06.110056 systemd[1]: Started dbus.service.
Aug 13 01:01:06.113692 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:01:06.113732 systemd[1]: Reached target system-config.target.
Aug 13 01:01:06.114404 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:01:06.114434 systemd[1]: Reached target user-config.target.
Aug 13 01:01:06.117509 dbus-daemon[1822]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1517 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:01:06.119391 extend-filesystems[1900]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 01:01:06.133216 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 13 01:01:06.142536 dbus-daemon[1822]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 01:01:06.149232 systemd[1]: Starting systemd-hostnamed.service...
Aug 13 01:01:06.160071 systemd[1]: Started systemd-logind.service.
Aug 13 01:01:06.218990 update_engine[1837]: I0813 01:01:06.217504 1837 main.cc:92] Flatcar Update Engine starting
Aug 13 01:01:06.227688 systemd[1]: Started update-engine.service.
Aug 13 01:01:06.229388 update_engine[1837]: I0813 01:01:06.229234 1837 update_check_scheduler.cc:74] Next update check in 7m52s
Aug 13 01:01:06.233104 systemd[1]: Started locksmithd.service.
Aug 13 01:01:06.356544 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 13 01:01:06.362664 systemd[1]: nvidia.service: Deactivated successfully.
Aug 13 01:01:06.384313 extend-filesystems[1900]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 13 01:01:06.384313 extend-filesystems[1900]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:01:06.384313 extend-filesystems[1900]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 13 01:01:06.396717 extend-filesystems[1824]: Resized filesystem in /dev/nvme0n1p9
Aug 13 01:01:06.385346 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:01:06.385692 systemd[1]: Finished extend-filesystems.service.
Aug 13 01:01:06.534600 dbus-daemon[1822]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:01:06.534794 systemd[1]: Started systemd-hostnamed.service.
Aug 13 01:01:06.537017 dbus-daemon[1822]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1909 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 01:01:06.542423 systemd[1]: Starting polkit.service...
Aug 13 01:01:06.571562 polkitd[1936]: Started polkitd version 121
Aug 13 01:01:06.605792 polkitd[1936]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 01:01:06.606043 polkitd[1936]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 01:01:06.608099 polkitd[1936]: Finished loading, compiling and executing 2 rules
Aug 13 01:01:06.608801 dbus-daemon[1822]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 01:01:06.608993 systemd[1]: Started polkit.service.
Aug 13 01:01:06.611172 polkitd[1936]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 01:01:06.640854 systemd-hostnamed[1909]: Hostname set to (transient)
Aug 13 01:01:06.640854 systemd-resolved[1771]: System hostname changed to 'ip-172-31-20-232'.
Aug 13 01:01:06.693375 coreos-metadata[1820]: Aug 13 01:01:06.693 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 13 01:01:06.696102 coreos-metadata[1820]: Aug 13 01:01:06.696 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Aug 13 01:01:06.697035 coreos-metadata[1820]: Aug 13 01:01:06.697 INFO Fetch successful
Aug 13 01:01:06.697180 coreos-metadata[1820]: Aug 13 01:01:06.697 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 13 01:01:06.698072 coreos-metadata[1820]: Aug 13 01:01:06.698 INFO Fetch successful
Aug 13 01:01:06.701030 unknown[1820]: wrote ssh authorized keys file for user: core
Aug 13 01:01:06.720217 update-ssh-keys[1956]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:01:06.720594 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Aug 13 01:01:06.784442 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Create new startup processor
Aug 13 01:01:06.784830 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [LongRunningPluginsManager] registered plugins: {}
Aug 13 01:01:06.784932 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing bookkeeping folders
Aug 13 01:01:06.785026 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO removing the completed state files
Aug 13 01:01:06.785112 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing bookkeeping folders for long running plugins
Aug 13 01:01:06.785224 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Aug 13 01:01:06.785325 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing healthcheck folders for long running plugins
Aug 13 01:01:06.785427 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing locations for inventory plugin
Aug 13 01:01:06.785521 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing default location for custom inventory
Aug 13 01:01:06.785610 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing default location for file inventory
Aug 13 01:01:06.785695 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Initializing default location for role inventory
Aug 13 01:01:06.785784 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Init the cloudwatchlogs publisher
Aug 13 01:01:06.785872 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:updateSsmAgent
Aug 13 01:01:06.785960 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:runDockerAction
Aug 13 01:01:06.786049 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:configurePackage
Aug 13 01:01:06.786129 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:downloadContent
Aug 13 01:01:06.786220 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:runDocument
Aug 13 01:01:06.786294 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:softwareInventory
Aug 13 01:01:06.786374 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:runPowerShellScript
Aug 13 01:01:06.786572 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:configureDocker
Aug 13 01:01:06.788933 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform independent plugin aws:refreshAssociation
Aug 13 01:01:06.789056 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Successfully loaded platform dependent plugin aws:runShellScript
Aug 13 01:01:06.789124 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Aug 13 01:01:06.789218 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO OS: linux, Arch: amd64
Aug 13 01:01:06.794211 amazon-ssm-agent[1842]: datastore file /var/lib/amazon/ssm/i-02c30592cfc4aa939/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Aug 13 01:01:06.796889 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] Starting document processing engine...
Aug 13 01:01:06.892791 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Aug 13 01:01:06.987147 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Aug 13 01:01:07.082483 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] Starting message polling
Aug 13 01:01:07.177616 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] Starting send replies to MDS
Aug 13 01:01:07.191437 tar[1840]: linux-amd64/LICENSE
Aug 13 01:01:07.191880 tar[1840]: linux-amd64/README.md
Aug 13 01:01:07.202895 systemd[1]: Finished prepare-helm.service.
Aug 13 01:01:07.271251 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [instanceID=i-02c30592cfc4aa939] Starting association polling
Aug 13 01:01:07.366360 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Aug 13 01:01:07.461695 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [Association] Launching response handler
Aug 13 01:01:07.509819 sshd_keygen[1866]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 01:01:07.534890 systemd[1]: Finished sshd-keygen.service.
Aug 13 01:01:07.538328 systemd[1]: Starting issuegen.service...
Aug 13 01:01:07.547248 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 01:01:07.547607 systemd[1]: Finished issuegen.service.
Aug 13 01:01:07.550987 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 01:01:07.557145 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Aug 13 01:01:07.576950 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 01:01:07.580157 systemd[1]: Started getty@tty1.service.
Aug 13 01:01:07.583464 systemd[1]: Started serial-getty@ttyS0.service.
Aug 13 01:01:07.584570 systemd[1]: Reached target getty.target.
Aug 13 01:01:07.653165 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Aug 13 01:01:07.711124 locksmithd[1917]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 01:01:07.749089 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Aug 13 01:01:07.845109 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] Starting session document processing engine...
Aug 13 01:01:07.941563 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] [EngineProcessor] Starting
Aug 13 01:01:08.038146 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Aug 13 01:01:08.135079 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-02c30592cfc4aa939, requestId: c64f0d62-531d-44f0-b959-776466d5b911
Aug 13 01:01:08.232011 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [OfflineService] Starting document processing engine...
Aug 13 01:01:08.329142 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [OfflineService] [EngineProcessor] Starting
Aug 13 01:01:08.426447 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [OfflineService] [EngineProcessor] Initial processing
Aug 13 01:01:08.523942 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [OfflineService] Starting message polling
Aug 13 01:01:08.621603 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [OfflineService] Starting send replies to MDS
Aug 13 01:01:08.719402 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [LongRunningPluginsManager] starting long running plugin manager
Aug 13 01:01:08.817409 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Aug 13 01:01:08.915738 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [HealthCheck] HealthCheck reporting agent health.
Aug 13 01:01:09.014127 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] listening reply.
Aug 13 01:01:09.112785 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Aug 13 01:01:09.211717 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [StartupProcessor] Executing startup processor tasks
Aug 13 01:01:09.310658 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Aug 13 01:01:09.352255 systemd[1]: Started kubelet.service.
Aug 13 01:01:09.353246 systemd[1]: Reached target multi-user.target.
Aug 13 01:01:09.355220 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Aug 13 01:01:09.362859 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Aug 13 01:01:09.363108 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Aug 13 01:01:09.364986 systemd[1]: Startup finished in 7.450s (kernel) + 12.886s (userspace) = 20.336s.
Aug 13 01:01:09.409914 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Aug 13 01:01:09.509396 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8
Aug 13 01:01:09.608910 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-02c30592cfc4aa939?role=subscribe&stream=input
Aug 13 01:01:09.708919 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-02c30592cfc4aa939?role=subscribe&stream=input
Aug 13 01:01:09.809055 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] Starting receiving message from control channel
Aug 13 01:01:09.909169 amazon-ssm-agent[1842]: 2025-08-13 01:01:06 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Aug 13 01:01:10.492770 kubelet[2066]: E0813 01:01:10.492714    2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:01:10.494250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:01:10.494418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:01:14.230268 systemd[1]: Created slice system-sshd.slice.
Aug 13 01:01:14.231503 systemd[1]: Started sshd@0-172.31.20.232:22-147.75.109.163:44392.service.
Aug 13 01:01:14.423584 sshd[2074]: Accepted publickey for core from 147.75.109.163 port 44392 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg
Aug 13 01:01:14.427442 sshd[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:14.442038 systemd[1]: Created slice user-500.slice.
Aug 13 01:01:14.443426 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 01:01:14.446647 systemd-logind[1835]: New session 1 of user core.
Aug 13 01:01:14.459059 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 01:01:14.461971 systemd[1]: Starting user@500.service...
Aug 13 01:01:14.468706 (systemd)[2079]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:14.560675 systemd[2079]: Queued start job for default target default.target.
Aug 13 01:01:14.561434 systemd[2079]: Reached target paths.target.
Aug 13 01:01:14.561464 systemd[2079]: Reached target sockets.target.
Aug 13 01:01:14.561477 systemd[2079]: Reached target timers.target.
Aug 13 01:01:14.561489 systemd[2079]: Reached target basic.target.
Aug 13 01:01:14.561626 systemd[1]: Started user@500.service.
Aug 13 01:01:14.562293 systemd[2079]: Reached target default.target.
Aug 13 01:01:14.562346 systemd[2079]: Startup finished in 86ms.
Aug 13 01:01:14.562558 systemd[1]: Started session-1.scope.
Aug 13 01:01:14.701918 systemd[1]: Started sshd@1-172.31.20.232:22-147.75.109.163:44400.service.
Aug 13 01:01:14.870761 sshd[2088]: Accepted publickey for core from 147.75.109.163 port 44400 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg
Aug 13 01:01:14.872367 sshd[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:14.877911 systemd[1]: Started session-2.scope.
Aug 13 01:01:14.878919 systemd-logind[1835]: New session 2 of user core.
Aug 13 01:01:15.005545 sshd[2088]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:15.008851 systemd[1]: sshd@1-172.31.20.232:22-147.75.109.163:44400.service: Deactivated successfully.
Aug 13 01:01:15.010298 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 01:01:15.010318 systemd-logind[1835]: Session 2 logged out. Waiting for processes to exit.
Aug 13 01:01:15.011763 systemd-logind[1835]: Removed session 2.
Aug 13 01:01:15.028914 systemd[1]: Started sshd@2-172.31.20.232:22-147.75.109.163:44404.service.
Aug 13 01:01:15.194367 sshd[2095]: Accepted publickey for core from 147.75.109.163 port 44404 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg
Aug 13 01:01:15.195449 sshd[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:15.201124 systemd[1]: Started session-3.scope.
Aug 13 01:01:15.201606 systemd-logind[1835]: New session 3 of user core.
Aug 13 01:01:15.323357 sshd[2095]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:15.326389 systemd[1]: sshd@2-172.31.20.232:22-147.75.109.163:44404.service: Deactivated successfully.
Aug 13 01:01:15.327211 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 01:01:15.328867 systemd-logind[1835]: Session 3 logged out. Waiting for processes to exit.
Aug 13 01:01:15.329805 systemd-logind[1835]: Removed session 3.
Aug 13 01:01:15.347312 systemd[1]: Started sshd@3-172.31.20.232:22-147.75.109.163:44406.service.
Aug 13 01:01:15.509449 sshd[2102]: Accepted publickey for core from 147.75.109.163 port 44406 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg
Aug 13 01:01:15.510430 sshd[2102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:15.515435 systemd[1]: Started session-4.scope.
Aug 13 01:01:15.515632 systemd-logind[1835]: New session 4 of user core.
Aug 13 01:01:15.642805 sshd[2102]: pam_unix(sshd:session): session closed for user core
Aug 13 01:01:15.645321 systemd[1]: sshd@3-172.31.20.232:22-147.75.109.163:44406.service: Deactivated successfully.
Aug 13 01:01:15.646157 systemd-logind[1835]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:01:15.646228 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:01:15.647097 systemd-logind[1835]: Removed session 4.
Aug 13 01:01:15.665799 systemd[1]: Started sshd@4-172.31.20.232:22-147.75.109.163:44410.service.
Aug 13 01:01:15.826798 sshd[2109]: Accepted publickey for core from 147.75.109.163 port 44410 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg
Aug 13 01:01:15.827750 sshd[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:01:15.833735 systemd[1]: Started session-5.scope.
Aug 13 01:01:15.834183 systemd-logind[1835]: New session 5 of user core.
Aug 13 01:01:15.954619 sudo[2113]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 01:01:15.954866 sudo[2113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 13 01:01:15.982989 systemd[1]: Starting docker.service...
Aug 13 01:01:16.023501 env[2123]: time="2025-08-13T01:01:16.023445811Z" level=info msg="Starting up"
Aug 13 01:01:16.024869 env[2123]: time="2025-08-13T01:01:16.024842914Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 01:01:16.024983 env[2123]: time="2025-08-13T01:01:16.024971768Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 01:01:16.025043 env[2123]: time="2025-08-13T01:01:16.025031205Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 01:01:16.025083 env[2123]: time="2025-08-13T01:01:16.025075706Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 01:01:16.027120 env[2123]: time="2025-08-13T01:01:16.027100246Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Aug 13 01:01:16.027272 env[2123]: time="2025-08-13T01:01:16.027252548Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Aug 13 01:01:16.027337 env[2123]: time="2025-08-13T01:01:16.027325930Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Aug 13 01:01:16.027378 env[2123]: time="2025-08-13T01:01:16.027370803Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 13 01:01:16.035458 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport838339282-merged.mount: Deactivated successfully.
Aug 13 01:01:16.110382 env[2123]: time="2025-08-13T01:01:16.109777166Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Aug 13 01:01:16.110382 env[2123]: time="2025-08-13T01:01:16.109806118Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Aug 13 01:01:16.110382 env[2123]: time="2025-08-13T01:01:16.109947342Z" level=info msg="Loading containers: start."
Aug 13 01:01:16.291392 kernel: Initializing XFRM netlink socket
Aug 13 01:01:16.329871 env[2123]: time="2025-08-13T01:01:16.329825227Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 13 01:01:16.330787 (udev-worker)[2134]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 01:01:16.413389 systemd-networkd[1517]: docker0: Link UP
Aug 13 01:01:16.435745 env[2123]: time="2025-08-13T01:01:16.435700811Z" level=info msg="Loading containers: done."
Aug 13 01:01:16.460458 env[2123]: time="2025-08-13T01:01:16.460396454Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 01:01:16.460674 env[2123]: time="2025-08-13T01:01:16.460589684Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Aug 13 01:01:16.460720 env[2123]: time="2025-08-13T01:01:16.460688760Z" level=info msg="Daemon has completed initialization"
Aug 13 01:01:16.489398 systemd[1]: Started docker.service.
Aug 13 01:01:16.500299 env[2123]: time="2025-08-13T01:01:16.500235646Z" level=info msg="API listen on /run/docker.sock"
Aug 13 01:01:17.681516 env[1846]: time="2025-08-13T01:01:17.681461177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 01:01:18.303257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1819683128.mount: Deactivated successfully.
Aug 13 01:01:19.936467 env[1846]: time="2025-08-13T01:01:19.936398885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:19.938605 env[1846]: time="2025-08-13T01:01:19.938566084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:19.940529 env[1846]: time="2025-08-13T01:01:19.940498612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:19.942533 env[1846]: time="2025-08-13T01:01:19.942417398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:19.943944 env[1846]: time="2025-08-13T01:01:19.943911558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 01:01:19.946251 env[1846]: time="2025-08-13T01:01:19.946222302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 01:01:20.745377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:01:20.745561 systemd[1]: Stopped kubelet.service.
Aug 13 01:01:20.747477 systemd[1]: Starting kubelet.service...
Aug 13 01:01:20.977516 systemd[1]: Started kubelet.service.
Aug 13 01:01:21.059847 kubelet[2252]: E0813 01:01:21.059716    2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:01:21.065461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:01:21.065679 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:01:21.809478 env[1846]: time="2025-08-13T01:01:21.809431757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:21.815315 env[1846]: time="2025-08-13T01:01:21.815272132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:21.818316 env[1846]: time="2025-08-13T01:01:21.818278349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:21.820923 env[1846]: time="2025-08-13T01:01:21.820878000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:21.821753 env[1846]: time="2025-08-13T01:01:21.821717563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 01:01:21.822444 env[1846]: time="2025-08-13T01:01:21.822422338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 01:01:23.415513 env[1846]: time="2025-08-13T01:01:23.415458782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:23.429291 env[1846]: time="2025-08-13T01:01:23.429250988Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:23.438215 env[1846]: time="2025-08-13T01:01:23.438150164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:23.447945 env[1846]: time="2025-08-13T01:01:23.447900501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:23.449382 env[1846]: time="2025-08-13T01:01:23.449335863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 01:01:23.450037 env[1846]: time="2025-08-13T01:01:23.450009602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 01:01:24.542943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601280543.mount: Deactivated successfully.
Aug 13 01:01:25.219124 env[1846]: time="2025-08-13T01:01:25.219073143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:25.221488 env[1846]: time="2025-08-13T01:01:25.221434434Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:25.223024 env[1846]: time="2025-08-13T01:01:25.222995890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:25.224624 env[1846]: time="2025-08-13T01:01:25.224578296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:25.225124 env[1846]: time="2025-08-13T01:01:25.225094681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 01:01:25.225794 env[1846]: time="2025-08-13T01:01:25.225693722Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 01:01:25.765517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236722882.mount: Deactivated successfully.
Aug 13 01:01:26.800920 env[1846]: time="2025-08-13T01:01:26.800871793Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:26.803103 env[1846]: time="2025-08-13T01:01:26.803058700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:26.805608 env[1846]: time="2025-08-13T01:01:26.805556671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:26.807784 env[1846]: time="2025-08-13T01:01:26.807742545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:26.808991 env[1846]: time="2025-08-13T01:01:26.808943967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:01:26.809647 env[1846]: time="2025-08-13T01:01:26.809619550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 01:01:27.283446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608612725.mount: Deactivated successfully.
Aug 13 01:01:27.288966 env[1846]: time="2025-08-13T01:01:27.288903530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:27.291011 env[1846]: time="2025-08-13T01:01:27.290964836Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:27.292907 env[1846]: time="2025-08-13T01:01:27.292856578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:27.294505 env[1846]: time="2025-08-13T01:01:27.294446353Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:27.294991 env[1846]: time="2025-08-13T01:01:27.294957641Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 01:01:27.295492 env[1846]: time="2025-08-13T01:01:27.295463762Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 01:01:27.859860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039777308.mount: Deactivated successfully.
Aug 13 01:01:30.245974 env[1846]: time="2025-08-13T01:01:30.245905161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:30.248484 env[1846]: time="2025-08-13T01:01:30.248444326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:30.250353 env[1846]: time="2025-08-13T01:01:30.250324315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:30.252183 env[1846]: time="2025-08-13T01:01:30.252011272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Aug 13 01:01:30.253102 env[1846]: time="2025-08-13T01:01:30.253074142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:01:31.088174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 01:01:31.088457 systemd[1]: Stopped kubelet.service.
Aug 13 01:01:31.091300 systemd[1]: Starting kubelet.service...
Aug 13 01:01:32.036757 systemd[1]: Started kubelet.service.
Aug 13 01:01:32.126090 kubelet[2283]: E0813 01:01:32.126039 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:01:32.128675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:01:32.128893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:01:32.974690 amazon-ssm-agent[1842]: 2025-08-13 01:01:32 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Aug 13 01:01:33.408358 systemd[1]: Stopped kubelet.service. Aug 13 01:01:33.411271 systemd[1]: Starting kubelet.service... Aug 13 01:01:33.449434 systemd[1]: Reloading. Aug 13 01:01:33.550437 /usr/lib/systemd/system-generators/torcx-generator[2318]: time="2025-08-13T01:01:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:01:33.550466 /usr/lib/systemd/system-generators/torcx-generator[2318]: time="2025-08-13T01:01:33Z" level=info msg="torcx already run" Aug 13 01:01:33.706631 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:01:33.706654 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Aug 13 01:01:33.734733 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:01:33.853836 systemd[1]: Started kubelet.service. Aug 13 01:01:33.856332 systemd[1]: Stopping kubelet.service... Aug 13 01:01:33.857526 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:01:33.857855 systemd[1]: Stopped kubelet.service. Aug 13 01:01:33.860361 systemd[1]: Starting kubelet.service... Aug 13 01:01:34.105022 systemd[1]: Started kubelet.service. Aug 13 01:01:34.187581 kubelet[2395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:01:34.187581 kubelet[2395]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:01:34.187581 kubelet[2395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:01:34.188001 kubelet[2395]: I0813 01:01:34.187637 2395 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:01:34.489858 kubelet[2395]: I0813 01:01:34.489360 2395 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:01:34.489858 kubelet[2395]: I0813 01:01:34.489401 2395 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:01:34.490345 kubelet[2395]: I0813 01:01:34.490324 2395 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:01:34.531624 kubelet[2395]: E0813 01:01:34.531587 2395 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:34.534209 kubelet[2395]: I0813 01:01:34.534163 2395 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:01:34.544303 kubelet[2395]: E0813 01:01:34.544252 2395 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:01:34.544303 kubelet[2395]: I0813 01:01:34.544301 2395 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:01:34.549018 kubelet[2395]: I0813 01:01:34.548871 2395 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:01:34.549225 kubelet[2395]: I0813 01:01:34.549188 2395 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:01:34.549380 kubelet[2395]: I0813 01:01:34.549323 2395 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:01:34.549622 kubelet[2395]: I0813 01:01:34.549361 2395 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManage
rPolicyOptions":null,"CgroupVersion":1} Aug 13 01:01:34.549622 kubelet[2395]: I0813 01:01:34.549613 2395 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:01:34.549815 kubelet[2395]: I0813 01:01:34.549630 2395 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:01:34.549815 kubelet[2395]: I0813 01:01:34.549784 2395 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:01:34.556241 kubelet[2395]: I0813 01:01:34.556178 2395 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:01:34.556241 kubelet[2395]: I0813 01:01:34.556248 2395 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:01:34.556392 kubelet[2395]: I0813 01:01:34.556289 2395 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:01:34.556392 kubelet[2395]: I0813 01:01:34.556305 2395 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:01:34.573933 kubelet[2395]: W0813 01:01:34.573885 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-232&limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:34.574298 kubelet[2395]: E0813 01:01:34.574121 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-232&limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:34.574298 kubelet[2395]: I0813 01:01:34.574224 2395 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:01:34.574653 kubelet[2395]: I0813 01:01:34.574612 2395 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" Aug 13 01:01:34.574758 kubelet[2395]: W0813 01:01:34.574668 2395 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:01:34.585109 kubelet[2395]: I0813 01:01:34.585061 2395 server.go:1274] "Started kubelet" Aug 13 01:01:34.592429 kubelet[2395]: W0813 01:01:34.592371 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:34.592587 kubelet[2395]: E0813 01:01:34.592435 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:34.592587 kubelet[2395]: I0813 01:01:34.592496 2395 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:01:34.596668 kubelet[2395]: I0813 01:01:34.596609 2395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:01:34.597345 kubelet[2395]: I0813 01:01:34.597322 2395 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:01:34.600216 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 13 01:01:34.600385 kubelet[2395]: I0813 01:01:34.600362 2395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:01:34.601962 kubelet[2395]: E0813 01:01:34.599705 2395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-232.185b2dda1be4a141 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-232,UID:ip-172-31-20-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-232,},FirstTimestamp:2025-08-13 01:01:34.585028929 +0000 UTC m=+0.472292331,LastTimestamp:2025-08-13 01:01:34.585028929 +0000 UTC m=+0.472292331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-232,}" Aug 13 01:01:34.607540 kubelet[2395]: I0813 01:01:34.607507 2395 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:01:34.608886 kubelet[2395]: I0813 01:01:34.608862 2395 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:01:34.609188 kubelet[2395]: E0813 01:01:34.609158 2395 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-232\" not found" Aug 13 01:01:34.609533 kubelet[2395]: I0813 01:01:34.609507 2395 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:01:34.609604 kubelet[2395]: I0813 01:01:34.609583 2395 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:01:34.610700 kubelet[2395]: I0813 01:01:34.610614 2395 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:01:34.612521 kubelet[2395]: W0813 
01:01:34.612469 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:34.612603 kubelet[2395]: E0813 01:01:34.612529 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:34.613808 kubelet[2395]: E0813 01:01:34.612645 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-232?timeout=10s\": dial tcp 172.31.20.232:6443: connect: connection refused" interval="200ms" Aug 13 01:01:34.615371 kubelet[2395]: I0813 01:01:34.614617 2395 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:01:34.615371 kubelet[2395]: I0813 01:01:34.614710 2395 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:01:34.618497 kubelet[2395]: E0813 01:01:34.618477 2395 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:01:34.618915 kubelet[2395]: I0813 01:01:34.618887 2395 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:01:34.646073 kubelet[2395]: I0813 01:01:34.646051 2395 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:01:34.646280 kubelet[2395]: I0813 01:01:34.646267 2395 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:01:34.646380 kubelet[2395]: I0813 01:01:34.646371 2395 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:01:34.649147 kubelet[2395]: I0813 01:01:34.649127 2395 policy_none.go:49] "None policy: Start" Aug 13 01:01:34.650446 kubelet[2395]: I0813 01:01:34.650271 2395 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:01:34.651342 kubelet[2395]: I0813 01:01:34.651327 2395 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:01:34.651502 kubelet[2395]: I0813 01:01:34.651491 2395 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:01:34.656221 kubelet[2395]: I0813 01:01:34.656158 2395 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:01:34.656221 kubelet[2395]: I0813 01:01:34.656187 2395 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:01:34.656458 kubelet[2395]: I0813 01:01:34.656233 2395 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:01:34.656458 kubelet[2395]: E0813 01:01:34.656280 2395 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:01:34.658607 kubelet[2395]: I0813 01:01:34.658580 2395 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:01:34.658758 kubelet[2395]: I0813 01:01:34.658743 2395 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:01:34.658838 kubelet[2395]: I0813 01:01:34.658764 2395 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:01:34.658980 kubelet[2395]: W0813 01:01:34.658948 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:34.659108 kubelet[2395]: E0813 01:01:34.659089 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:34.661136 kubelet[2395]: I0813 01:01:34.661106 2395 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:01:34.664145 kubelet[2395]: E0813 01:01:34.664116 2395 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"ip-172-31-20-232\" not found" Aug 13 01:01:34.769818 kubelet[2395]: I0813 01:01:34.762374 2395 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-232" Aug 13 01:01:34.772203 kubelet[2395]: E0813 01:01:34.772157 2395 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.232:6443/api/v1/nodes\": dial tcp 172.31.20.232:6443: connect: connection refused" node="ip-172-31-20-232" Aug 13 01:01:34.810449 kubelet[2395]: I0813 01:01:34.810409 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:34.810449 kubelet[2395]: I0813 01:01:34.810447 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:34.810652 kubelet[2395]: I0813 01:01:34.810476 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4705d8dce7decd26ae87ec8b7abcfe72-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-232\" (UID: \"4705d8dce7decd26ae87ec8b7abcfe72\") " pod="kube-system/kube-scheduler-ip-172-31-20-232" Aug 13 01:01:34.810652 kubelet[2395]: I0813 01:01:34.810497 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d80783c88f41e7d01abd4cb2ee2a26e-ca-certs\") pod 
\"kube-apiserver-ip-172-31-20-232\" (UID: \"7d80783c88f41e7d01abd4cb2ee2a26e\") " pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:34.810652 kubelet[2395]: I0813 01:01:34.810512 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d80783c88f41e7d01abd4cb2ee2a26e-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-232\" (UID: \"7d80783c88f41e7d01abd4cb2ee2a26e\") " pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:34.810652 kubelet[2395]: I0813 01:01:34.810527 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d80783c88f41e7d01abd4cb2ee2a26e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-232\" (UID: \"7d80783c88f41e7d01abd4cb2ee2a26e\") " pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:34.810652 kubelet[2395]: I0813 01:01:34.810542 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:34.810792 kubelet[2395]: I0813 01:01:34.810556 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:34.810792 kubelet[2395]: I0813 01:01:34.810570 2395 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:34.813820 kubelet[2395]: E0813 01:01:34.813777 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-232?timeout=10s\": dial tcp 172.31.20.232:6443: connect: connection refused" interval="400ms" Aug 13 01:01:34.974914 kubelet[2395]: I0813 01:01:34.974887 2395 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-232" Aug 13 01:01:34.975378 kubelet[2395]: E0813 01:01:34.975348 2395 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.232:6443/api/v1/nodes\": dial tcp 172.31.20.232:6443: connect: connection refused" node="ip-172-31-20-232" Aug 13 01:01:35.063380 env[1846]: time="2025-08-13T01:01:35.063336479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-232,Uid:7d80783c88f41e7d01abd4cb2ee2a26e,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:35.066991 env[1846]: time="2025-08-13T01:01:35.066948478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-232,Uid:6f4329410a6e5f53aed016faebc2450d,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:35.073207 env[1846]: time="2025-08-13T01:01:35.073142336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-232,Uid:4705d8dce7decd26ae87ec8b7abcfe72,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:35.214775 kubelet[2395]: E0813 01:01:35.214723 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-232?timeout=10s\": dial tcp 172.31.20.232:6443: connect: 
connection refused" interval="800ms" Aug 13 01:01:35.377839 kubelet[2395]: I0813 01:01:35.377488 2395 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-232" Aug 13 01:01:35.378018 kubelet[2395]: E0813 01:01:35.377969 2395 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.232:6443/api/v1/nodes\": dial tcp 172.31.20.232:6443: connect: connection refused" node="ip-172-31-20-232" Aug 13 01:01:35.495807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156068717.mount: Deactivated successfully. Aug 13 01:01:35.504974 env[1846]: time="2025-08-13T01:01:35.504924532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.507590 env[1846]: time="2025-08-13T01:01:35.507544134Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.508783 env[1846]: time="2025-08-13T01:01:35.508752062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.510065 env[1846]: time="2025-08-13T01:01:35.510034965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.513265 env[1846]: time="2025-08-13T01:01:35.513227868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.514027 env[1846]: time="2025-08-13T01:01:35.513986634Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.515019 env[1846]: time="2025-08-13T01:01:35.514991724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.516751 env[1846]: time="2025-08-13T01:01:35.516729630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.518071 env[1846]: time="2025-08-13T01:01:35.518042450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.518659 env[1846]: time="2025-08-13T01:01:35.518632887Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.519245 env[1846]: time="2025-08-13T01:01:35.519192575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.519903 env[1846]: time="2025-08-13T01:01:35.519875214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:35.565911 env[1846]: time="2025-08-13T01:01:35.565838334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:35.565911 env[1846]: time="2025-08-13T01:01:35.565879137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:35.565911 env[1846]: time="2025-08-13T01:01:35.565891001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:35.566392 env[1846]: time="2025-08-13T01:01:35.566344432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed3ff6137eab0a1a351f636a0bc83133f7993394d777b3918298e0574dce2832 pid=2446 runtime=io.containerd.runc.v2 Aug 13 01:01:35.568621 env[1846]: time="2025-08-13T01:01:35.568556892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:35.568730 env[1846]: time="2025-08-13T01:01:35.568637875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:35.568730 env[1846]: time="2025-08-13T01:01:35.568659542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:35.568858 env[1846]: time="2025-08-13T01:01:35.568787276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6126f9aa2c6f75750f36079e4e15a165692c09006a3e04d1ca798c62733d295a pid=2439 runtime=io.containerd.runc.v2 Aug 13 01:01:35.576857 env[1846]: time="2025-08-13T01:01:35.576626772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:35.576857 env[1846]: time="2025-08-13T01:01:35.576674256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:35.576857 env[1846]: time="2025-08-13T01:01:35.576691558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:35.577286 env[1846]: time="2025-08-13T01:01:35.577208524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdd5c601e258017788ef393dfcc4163b7403150c223afea5a4e1b4c96050a3f1 pid=2456 runtime=io.containerd.runc.v2 Aug 13 01:01:35.686364 env[1846]: time="2025-08-13T01:01:35.682762821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-232,Uid:6f4329410a6e5f53aed016faebc2450d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdd5c601e258017788ef393dfcc4163b7403150c223afea5a4e1b4c96050a3f1\"" Aug 13 01:01:35.688002 env[1846]: time="2025-08-13T01:01:35.687388573Z" level=info msg="CreateContainer within sandbox \"bdd5c601e258017788ef393dfcc4163b7403150c223afea5a4e1b4c96050a3f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:01:35.717262 env[1846]: time="2025-08-13T01:01:35.717154230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-232,Uid:4705d8dce7decd26ae87ec8b7abcfe72,Namespace:kube-system,Attempt:0,} returns sandbox id \"6126f9aa2c6f75750f36079e4e15a165692c09006a3e04d1ca798c62733d295a\"" Aug 13 01:01:35.722491 env[1846]: time="2025-08-13T01:01:35.722451458Z" level=info msg="CreateContainer within sandbox \"6126f9aa2c6f75750f36079e4e15a165692c09006a3e04d1ca798c62733d295a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:01:35.723810 env[1846]: time="2025-08-13T01:01:35.723759491Z" level=info msg="CreateContainer within sandbox \"bdd5c601e258017788ef393dfcc4163b7403150c223afea5a4e1b4c96050a3f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1\"" Aug 13 01:01:35.725083 env[1846]: time="2025-08-13T01:01:35.725048640Z" level=info msg="StartContainer for \"91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1\"" Aug 13 01:01:35.734990 env[1846]: time="2025-08-13T01:01:35.734943397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-232,Uid:7d80783c88f41e7d01abd4cb2ee2a26e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed3ff6137eab0a1a351f636a0bc83133f7993394d777b3918298e0574dce2832\"" Aug 13 01:01:35.739885 env[1846]: time="2025-08-13T01:01:35.739840731Z" level=info msg="CreateContainer within sandbox \"ed3ff6137eab0a1a351f636a0bc83133f7993394d777b3918298e0574dce2832\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:01:35.742942 env[1846]: time="2025-08-13T01:01:35.742899314Z" level=info msg="CreateContainer within sandbox \"6126f9aa2c6f75750f36079e4e15a165692c09006a3e04d1ca798c62733d295a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43\"" Aug 13 01:01:35.743707 env[1846]: time="2025-08-13T01:01:35.743670811Z" level=info msg="StartContainer for \"87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43\"" Aug 13 01:01:35.758249 env[1846]: time="2025-08-13T01:01:35.758185401Z" level=info msg="CreateContainer within sandbox \"ed3ff6137eab0a1a351f636a0bc83133f7993394d777b3918298e0574dce2832\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ae4768334024285a9a4f7f3473e4ac81707afc7521a185a2ccf731bc1611e53\"" Aug 13 01:01:35.759174 env[1846]: time="2025-08-13T01:01:35.759137792Z" level=info msg="StartContainer for \"0ae4768334024285a9a4f7f3473e4ac81707afc7521a185a2ccf731bc1611e53\"" Aug 13 01:01:35.830044 kubelet[2395]: W0813 01:01:35.829899 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://172.31.20.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-232&limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:35.830044 kubelet[2395]: E0813 01:01:35.830006 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-232&limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:35.886687 env[1846]: time="2025-08-13T01:01:35.886561727Z" level=info msg="StartContainer for \"91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1\" returns successfully" Aug 13 01:01:35.893616 env[1846]: time="2025-08-13T01:01:35.893206228Z" level=info msg="StartContainer for \"0ae4768334024285a9a4f7f3473e4ac81707afc7521a185a2ccf731bc1611e53\" returns successfully" Aug 13 01:01:35.899260 kubelet[2395]: W0813 01:01:35.899173 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:35.899417 kubelet[2395]: E0813 01:01:35.899273 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:35.916240 env[1846]: time="2025-08-13T01:01:35.916167945Z" level=info msg="StartContainer for \"87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43\" returns successfully" Aug 13 01:01:36.016253 kubelet[2395]: E0813 01:01:36.016096 2395 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-232?timeout=10s\": dial tcp 172.31.20.232:6443: connect: connection refused" interval="1.6s" Aug 13 01:01:36.117581 kubelet[2395]: W0813 01:01:36.117514 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:36.117769 kubelet[2395]: E0813 01:01:36.117597 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:36.149319 kubelet[2395]: W0813 01:01:36.149253 2395 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.232:6443: connect: connection refused Aug 13 01:01:36.149504 kubelet[2395]: E0813 01:01:36.149335 2395 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:36.180639 kubelet[2395]: I0813 01:01:36.180231 2395 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-232" Aug 13 01:01:36.180639 kubelet[2395]: E0813 01:01:36.180589 2395 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.232:6443/api/v1/nodes\": dial 
tcp 172.31.20.232:6443: connect: connection refused" node="ip-172-31-20-232" Aug 13 01:01:36.577926 kubelet[2395]: E0813 01:01:36.577874 2395 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.232:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:01:36.673295 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:01:37.616628 kubelet[2395]: E0813 01:01:37.616591 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-232?timeout=10s\": dial tcp 172.31.20.232:6443: connect: connection refused" interval="3.2s" Aug 13 01:01:37.782543 kubelet[2395]: I0813 01:01:37.782518 2395 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-232" Aug 13 01:01:39.144755 kubelet[2395]: I0813 01:01:39.144723 2395 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-232" Aug 13 01:01:39.585673 kubelet[2395]: I0813 01:01:39.585625 2395 apiserver.go:52] "Watching apiserver" Aug 13 01:01:39.610534 kubelet[2395]: I0813 01:01:39.610475 2395 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:01:39.854961 kubelet[2395]: E0813 01:01:39.854727 2395 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:41.136785 systemd[1]: Reloading. 
Aug 13 01:01:41.202215 /usr/lib/systemd/system-generators/torcx-generator[2687]: time="2025-08-13T01:01:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:01:41.202632 /usr/lib/systemd/system-generators/torcx-generator[2687]: time="2025-08-13T01:01:41Z" level=info msg="torcx already run" Aug 13 01:01:41.322556 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:01:41.322577 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:01:41.342712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:01:41.452957 kubelet[2395]: I0813 01:01:41.452869 2395 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:01:41.455533 systemd[1]: Stopping kubelet.service... Aug 13 01:01:41.474978 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:01:41.475319 systemd[1]: Stopped kubelet.service. Aug 13 01:01:41.477455 systemd[1]: Starting kubelet.service... Aug 13 01:01:43.001748 systemd[1]: Started kubelet.service. Aug 13 01:01:43.107894 kubelet[2758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:01:43.107894 kubelet[2758]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:01:43.107894 kubelet[2758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:01:43.108937 kubelet[2758]: I0813 01:01:43.107996 2758 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:01:43.115845 sudo[2768]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:01:43.116309 sudo[2768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 01:01:43.123463 kubelet[2758]: I0813 01:01:43.123428 2758 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:01:43.123463 kubelet[2758]: I0813 01:01:43.123455 2758 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:01:43.123903 kubelet[2758]: I0813 01:01:43.123851 2758 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:01:43.125630 kubelet[2758]: I0813 01:01:43.125603 2758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:01:43.130674 kubelet[2758]: I0813 01:01:43.130634 2758 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:01:43.139188 kubelet[2758]: E0813 01:01:43.139146 2758 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:01:43.139696 kubelet[2758]: I0813 01:01:43.139667 2758 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:01:43.151478 kubelet[2758]: I0813 01:01:43.151447 2758 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:01:43.152254 kubelet[2758]: I0813 01:01:43.152230 2758 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:01:43.152543 kubelet[2758]: I0813 01:01:43.152512 2758 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:01:43.152787 kubelet[2758]: I0813 01:01:43.152621 2758 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-20-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 01:01:43.152930 kubelet[2758]: I0813 01:01:43.152920 2758 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:01:43.152983 kubelet[2758]: I0813 01:01:43.152978 2758 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:01:43.153051 kubelet[2758]: I0813 01:01:43.153045 2758 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:01:43.153241 kubelet[2758]: I0813 01:01:43.153230 2758 kubelet.go:408] 
"Attempting to sync node with API server" Aug 13 01:01:43.153343 kubelet[2758]: I0813 01:01:43.153333 2758 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:01:43.153446 kubelet[2758]: I0813 01:01:43.153437 2758 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:01:43.153525 kubelet[2758]: I0813 01:01:43.153517 2758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:01:43.163495 kubelet[2758]: I0813 01:01:43.163470 2758 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:01:43.164249 kubelet[2758]: I0813 01:01:43.164229 2758 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:01:43.164876 kubelet[2758]: I0813 01:01:43.164862 2758 server.go:1274] "Started kubelet" Aug 13 01:01:43.175455 kubelet[2758]: I0813 01:01:43.175429 2758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:01:43.178762 kubelet[2758]: I0813 01:01:43.178712 2758 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:01:43.180274 kubelet[2758]: I0813 01:01:43.180251 2758 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:01:43.183058 kubelet[2758]: I0813 01:01:43.183025 2758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:01:43.183531 kubelet[2758]: I0813 01:01:43.183516 2758 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:01:43.184029 kubelet[2758]: I0813 01:01:43.183999 2758 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:01:43.185954 kubelet[2758]: I0813 01:01:43.185936 2758 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:01:43.193815 kubelet[2758]: I0813 
01:01:43.191454 2758 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:01:43.193815 kubelet[2758]: I0813 01:01:43.191651 2758 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:01:43.207518 kubelet[2758]: I0813 01:01:43.207478 2758 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:01:43.213247 kubelet[2758]: I0813 01:01:43.213221 2758 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:01:43.213247 kubelet[2758]: I0813 01:01:43.213242 2758 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:01:43.227364 kubelet[2758]: E0813 01:01:43.227323 2758 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:01:43.248977 kubelet[2758]: I0813 01:01:43.248914 2758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:01:43.251878 kubelet[2758]: I0813 01:01:43.251338 2758 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:01:43.251878 kubelet[2758]: I0813 01:01:43.251386 2758 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:01:43.251878 kubelet[2758]: I0813 01:01:43.251408 2758 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:01:43.251878 kubelet[2758]: E0813 01:01:43.251573 2758 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:01:43.318619 kubelet[2758]: I0813 01:01:43.318599 2758 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:01:43.318803 kubelet[2758]: I0813 01:01:43.318779 2758 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:01:43.318932 kubelet[2758]: I0813 01:01:43.318923 2758 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:01:43.319187 kubelet[2758]: I0813 01:01:43.319173 2758 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:01:43.319387 kubelet[2758]: I0813 01:01:43.319279 2758 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:01:43.319477 kubelet[2758]: I0813 01:01:43.319469 2758 policy_none.go:49] "None policy: Start" Aug 13 01:01:43.320398 kubelet[2758]: I0813 01:01:43.320378 2758 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:01:43.320496 kubelet[2758]: I0813 01:01:43.320404 2758 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:01:43.320615 kubelet[2758]: I0813 01:01:43.320596 2758 state_mem.go:75] "Updated machine memory state" Aug 13 01:01:43.323232 kubelet[2758]: I0813 01:01:43.322286 2758 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:01:43.323232 kubelet[2758]: I0813 01:01:43.322469 2758 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:01:43.323232 kubelet[2758]: I0813 01:01:43.322480 2758 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:01:43.326169 kubelet[2758]: I0813 01:01:43.326151 2758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:01:43.374888 kubelet[2758]: E0813 01:01:43.374856 2758 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-20-232\" already exists" pod="kube-system/kube-scheduler-ip-172-31-20-232" Aug 13 01:01:43.398409 kubelet[2758]: I0813 01:01:43.393706 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:43.398409 kubelet[2758]: I0813 01:01:43.393754 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4705d8dce7decd26ae87ec8b7abcfe72-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-232\" (UID: \"4705d8dce7decd26ae87ec8b7abcfe72\") " pod="kube-system/kube-scheduler-ip-172-31-20-232" Aug 13 01:01:43.398409 kubelet[2758]: I0813 01:01:43.393775 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d80783c88f41e7d01abd4cb2ee2a26e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-232\" (UID: \"7d80783c88f41e7d01abd4cb2ee2a26e\") " pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:43.398409 kubelet[2758]: I0813 01:01:43.393795 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: 
\"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:43.398409 kubelet[2758]: I0813 01:01:43.393811 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:43.398664 kubelet[2758]: I0813 01:01:43.393829 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:43.398664 kubelet[2758]: I0813 01:01:43.393847 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d80783c88f41e7d01abd4cb2ee2a26e-ca-certs\") pod \"kube-apiserver-ip-172-31-20-232\" (UID: \"7d80783c88f41e7d01abd4cb2ee2a26e\") " pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:43.398664 kubelet[2758]: I0813 01:01:43.393861 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d80783c88f41e7d01abd4cb2ee2a26e-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-232\" (UID: \"7d80783c88f41e7d01abd4cb2ee2a26e\") " pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:43.398664 kubelet[2758]: I0813 01:01:43.393876 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6f4329410a6e5f53aed016faebc2450d-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-232\" (UID: \"6f4329410a6e5f53aed016faebc2450d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-232" Aug 13 01:01:43.449814 kubelet[2758]: I0813 01:01:43.449793 2758 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-232" Aug 13 01:01:43.458413 kubelet[2758]: I0813 01:01:43.458383 2758 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-20-232" Aug 13 01:01:43.458701 kubelet[2758]: I0813 01:01:43.458686 2758 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-232" Aug 13 01:01:44.163541 kubelet[2758]: I0813 01:01:44.163498 2758 apiserver.go:52] "Watching apiserver" Aug 13 01:01:44.192408 kubelet[2758]: I0813 01:01:44.192387 2758 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:01:44.215660 sudo[2768]: pam_unix(sudo:session): session closed for user root Aug 13 01:01:44.296690 kubelet[2758]: E0813 01:01:44.296655 2758 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-232\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-232" Aug 13 01:01:44.330566 kubelet[2758]: I0813 01:01:44.330237 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-232" podStartSLOduration=1.330215628 podStartE2EDuration="1.330215628s" podCreationTimestamp="2025-08-13 01:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:01:44.320491082 +0000 UTC m=+1.291334914" watchObservedRunningTime="2025-08-13 01:01:44.330215628 +0000 UTC m=+1.301059454" Aug 13 01:01:44.330796 kubelet[2758]: I0813 01:01:44.330764 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-232" 
podStartSLOduration=4.330750972 podStartE2EDuration="4.330750972s" podCreationTimestamp="2025-08-13 01:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:01:44.330507204 +0000 UTC m=+1.301351039" watchObservedRunningTime="2025-08-13 01:01:44.330750972 +0000 UTC m=+1.301594802" Aug 13 01:01:44.359715 kubelet[2758]: I0813 01:01:44.359655 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-232" podStartSLOduration=1.359617087 podStartE2EDuration="1.359617087s" podCreationTimestamp="2025-08-13 01:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:01:44.344084852 +0000 UTC m=+1.314928685" watchObservedRunningTime="2025-08-13 01:01:44.359617087 +0000 UTC m=+1.330460929" Aug 13 01:01:46.534587 sudo[2113]: pam_unix(sudo:session): session closed for user root Aug 13 01:01:46.563433 sshd[2109]: pam_unix(sshd:session): session closed for user core Aug 13 01:01:46.567016 systemd[1]: sshd@4-172.31.20.232:22-147.75.109.163:44410.service: Deactivated successfully. Aug 13 01:01:46.568504 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:01:46.569356 systemd-logind[1835]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:01:46.570937 systemd-logind[1835]: Removed session 5. Aug 13 01:01:46.677782 kubelet[2758]: I0813 01:01:46.677754 2758 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:01:46.678346 env[1846]: time="2025-08-13T01:01:46.678268876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 01:01:46.678721 kubelet[2758]: I0813 01:01:46.678618 2758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:01:47.628583 kubelet[2758]: I0813 01:01:47.628545 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a1cce10-0722-4fbb-ade6-78a2897e8100-clustermesh-secrets\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628583 kubelet[2758]: I0813 01:01:47.628584 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-run\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628798 kubelet[2758]: I0813 01:01:47.628608 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e2bfdbc-b353-470b-abb0-48c5ea33fae8-kube-proxy\") pod \"kube-proxy-dxgm6\" (UID: \"7e2bfdbc-b353-470b-abb0-48c5ea33fae8\") " pod="kube-system/kube-proxy-dxgm6" Aug 13 01:01:47.628798 kubelet[2758]: I0813 01:01:47.628623 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-etc-cni-netd\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628798 kubelet[2758]: I0813 01:01:47.628648 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-xtables-lock\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " 
pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628798 kubelet[2758]: I0813 01:01:47.628664 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-net\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628798 kubelet[2758]: I0813 01:01:47.628682 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxf7w\" (UniqueName: \"kubernetes.io/projected/7e2bfdbc-b353-470b-abb0-48c5ea33fae8-kube-api-access-zxf7w\") pod \"kube-proxy-dxgm6\" (UID: \"7e2bfdbc-b353-470b-abb0-48c5ea33fae8\") " pod="kube-system/kube-proxy-dxgm6" Aug 13 01:01:47.628798 kubelet[2758]: I0813 01:01:47.628698 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cni-path\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628970 kubelet[2758]: I0813 01:01:47.628711 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-hubble-tls\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628970 kubelet[2758]: I0813 01:01:47.628733 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7j6\" (UniqueName: \"kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-kube-api-access-pf7j6\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628970 kubelet[2758]: I0813 01:01:47.628751 2758 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-bpf-maps\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628970 kubelet[2758]: I0813 01:01:47.628766 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-cgroup\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.628970 kubelet[2758]: I0813 01:01:47.628780 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e2bfdbc-b353-470b-abb0-48c5ea33fae8-lib-modules\") pod \"kube-proxy-dxgm6\" (UID: \"7e2bfdbc-b353-470b-abb0-48c5ea33fae8\") " pod="kube-system/kube-proxy-dxgm6" Aug 13 01:01:47.628970 kubelet[2758]: I0813 01:01:47.628799 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-kernel\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.629130 kubelet[2758]: I0813 01:01:47.628816 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e2bfdbc-b353-470b-abb0-48c5ea33fae8-xtables-lock\") pod \"kube-proxy-dxgm6\" (UID: \"7e2bfdbc-b353-470b-abb0-48c5ea33fae8\") " pod="kube-system/kube-proxy-dxgm6" Aug 13 01:01:47.629130 kubelet[2758]: I0813 01:01:47.628830 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-lib-modules\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.629130 kubelet[2758]: I0813 01:01:47.628849 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-hostproc\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.629130 kubelet[2758]: I0813 01:01:47.628865 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-config-path\") pod \"cilium-qx2nx\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " pod="kube-system/cilium-qx2nx" Aug 13 01:01:47.729977 kubelet[2758]: I0813 01:01:47.729943 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-cilium-config-path\") pod \"cilium-operator-5d85765b45-xpcx9\" (UID: \"98d0a19a-56f0-48cb-b1de-03bcce9bf4ed\") " pod="kube-system/cilium-operator-5d85765b45-xpcx9" Aug 13 01:01:47.730460 kubelet[2758]: I0813 01:01:47.730442 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzjt\" (UniqueName: \"kubernetes.io/projected/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-kube-api-access-hxzjt\") pod \"cilium-operator-5d85765b45-xpcx9\" (UID: \"98d0a19a-56f0-48cb-b1de-03bcce9bf4ed\") " pod="kube-system/cilium-operator-5d85765b45-xpcx9" Aug 13 01:01:47.731428 kubelet[2758]: I0813 01:01:47.731403 2758 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 01:01:47.879832 env[1846]: time="2025-08-13T01:01:47.879049601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxgm6,Uid:7e2bfdbc-b353-470b-abb0-48c5ea33fae8,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:47.882932 env[1846]: time="2025-08-13T01:01:47.882895367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qx2nx,Uid:3a1cce10-0722-4fbb-ade6-78a2897e8100,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:47.911942 env[1846]: time="2025-08-13T01:01:47.908260284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:47.911942 env[1846]: time="2025-08-13T01:01:47.908305742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:47.911942 env[1846]: time="2025-08-13T01:01:47.908316152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:47.911942 env[1846]: time="2025-08-13T01:01:47.908522461Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bee004c4155bfb754ccf37ce7d17310cbf446e3fd8ffe0018f5ce6159a7a956 pid=2845 runtime=io.containerd.runc.v2 Aug 13 01:01:47.920681 env[1846]: time="2025-08-13T01:01:47.920475473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:47.920681 env[1846]: time="2025-08-13T01:01:47.920611165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:47.920681 env[1846]: time="2025-08-13T01:01:47.920621871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:47.921417 env[1846]: time="2025-08-13T01:01:47.921363977Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694 pid=2862 runtime=io.containerd.runc.v2 Aug 13 01:01:47.982524 env[1846]: time="2025-08-13T01:01:47.982307003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xpcx9,Uid:98d0a19a-56f0-48cb-b1de-03bcce9bf4ed,Namespace:kube-system,Attempt:0,}" Aug 13 01:01:47.987755 env[1846]: time="2025-08-13T01:01:47.987704490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qx2nx,Uid:3a1cce10-0722-4fbb-ade6-78a2897e8100,Namespace:kube-system,Attempt:0,} returns sandbox id \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\"" Aug 13 01:01:47.991165 env[1846]: time="2025-08-13T01:01:47.991123557Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:01:47.997908 env[1846]: time="2025-08-13T01:01:47.997864829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxgm6,Uid:7e2bfdbc-b353-470b-abb0-48c5ea33fae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bee004c4155bfb754ccf37ce7d17310cbf446e3fd8ffe0018f5ce6159a7a956\"" Aug 13 01:01:48.004376 env[1846]: time="2025-08-13T01:01:48.002984360Z" level=info msg="CreateContainer within sandbox \"5bee004c4155bfb754ccf37ce7d17310cbf446e3fd8ffe0018f5ce6159a7a956\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:01:48.024240 env[1846]: time="2025-08-13T01:01:48.023961950Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:01:48.024364 env[1846]: time="2025-08-13T01:01:48.024243737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:01:48.024364 env[1846]: time="2025-08-13T01:01:48.024270925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:01:48.025577 env[1846]: time="2025-08-13T01:01:48.024490827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956 pid=2927 runtime=io.containerd.runc.v2 Aug 13 01:01:48.049646 env[1846]: time="2025-08-13T01:01:48.049594208Z" level=info msg="CreateContainer within sandbox \"5bee004c4155bfb754ccf37ce7d17310cbf446e3fd8ffe0018f5ce6159a7a956\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b397478f57ce2cbfbb46feb16cde8ce7c272c556fb1a7f581de1f5d982f87665\"" Aug 13 01:01:48.050805 env[1846]: time="2025-08-13T01:01:48.050770859Z" level=info msg="StartContainer for \"b397478f57ce2cbfbb46feb16cde8ce7c272c556fb1a7f581de1f5d982f87665\"" Aug 13 01:01:48.133498 env[1846]: time="2025-08-13T01:01:48.133316915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xpcx9,Uid:98d0a19a-56f0-48cb-b1de-03bcce9bf4ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\"" Aug 13 01:01:48.159795 env[1846]: time="2025-08-13T01:01:48.159753497Z" level=info msg="StartContainer for \"b397478f57ce2cbfbb46feb16cde8ce7c272c556fb1a7f581de1f5d982f87665\" returns successfully" Aug 13 01:01:48.310914 kubelet[2758]: I0813 01:01:48.310856 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dxgm6" 
podStartSLOduration=1.310839274 podStartE2EDuration="1.310839274s" podCreationTimestamp="2025-08-13 01:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:01:48.310694193 +0000 UTC m=+5.281538025" watchObservedRunningTime="2025-08-13 01:01:48.310839274 +0000 UTC m=+5.281683106" Aug 13 01:01:51.034603 update_engine[1837]: I0813 01:01:51.033859 1837 update_attempter.cc:509] Updating boot flags... Aug 13 01:01:53.486648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826058548.mount: Deactivated successfully. Aug 13 01:01:56.520228 env[1846]: time="2025-08-13T01:01:56.519747152Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:56.530335 env[1846]: time="2025-08-13T01:01:56.530278290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:56.534070 env[1846]: time="2025-08-13T01:01:56.534008988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:56.534731 env[1846]: time="2025-08-13T01:01:56.534688950Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:01:56.537230 env[1846]: time="2025-08-13T01:01:56.537170483Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:01:56.537903 env[1846]: time="2025-08-13T01:01:56.537876279Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:01:56.559676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749151499.mount: Deactivated successfully. Aug 13 01:01:56.568481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089229611.mount: Deactivated successfully. Aug 13 01:01:56.577374 env[1846]: time="2025-08-13T01:01:56.577261623Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1\"" Aug 13 01:01:56.578004 env[1846]: time="2025-08-13T01:01:56.577965805Z" level=info msg="StartContainer for \"5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1\"" Aug 13 01:01:56.638225 env[1846]: time="2025-08-13T01:01:56.636520767Z" level=info msg="StartContainer for \"5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1\" returns successfully" Aug 13 01:01:56.829135 env[1846]: time="2025-08-13T01:01:56.829086923Z" level=info msg="shim disconnected" id=5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1 Aug 13 01:01:56.829135 env[1846]: time="2025-08-13T01:01:56.829135299Z" level=warning msg="cleaning up after shim disconnected" id=5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1 namespace=k8s.io Aug 13 01:01:56.829417 env[1846]: time="2025-08-13T01:01:56.829145780Z" level=info msg="cleaning up dead shim" Aug 13 01:01:56.837357 env[1846]: time="2025-08-13T01:01:56.837296144Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:56Z\" level=info msg=\"starting signal 
loop\" namespace=k8s.io pid=3272 runtime=io.containerd.runc.v2\n" Aug 13 01:01:57.332397 env[1846]: time="2025-08-13T01:01:57.332336608Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:01:57.352155 env[1846]: time="2025-08-13T01:01:57.351944278Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134\"" Aug 13 01:01:57.352819 env[1846]: time="2025-08-13T01:01:57.352789623Z" level=info msg="StartContainer for \"73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134\"" Aug 13 01:01:57.421690 env[1846]: time="2025-08-13T01:01:57.418682820Z" level=info msg="StartContainer for \"73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134\" returns successfully" Aug 13 01:01:57.431081 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:01:57.431359 systemd[1]: Stopped systemd-sysctl.service. Aug 13 01:01:57.431527 systemd[1]: Stopping systemd-sysctl.service... Aug 13 01:01:57.433614 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:01:57.447272 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 01:01:57.469171 env[1846]: time="2025-08-13T01:01:57.469125474Z" level=info msg="shim disconnected" id=73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134 Aug 13 01:01:57.469171 env[1846]: time="2025-08-13T01:01:57.469167033Z" level=warning msg="cleaning up after shim disconnected" id=73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134 namespace=k8s.io Aug 13 01:01:57.469171 env[1846]: time="2025-08-13T01:01:57.469175762Z" level=info msg="cleaning up dead shim" Aug 13 01:01:57.477614 env[1846]: time="2025-08-13T01:01:57.477562257Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3338 runtime=io.containerd.runc.v2\n" Aug 13 01:01:57.555558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1-rootfs.mount: Deactivated successfully. Aug 13 01:01:57.771744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022393349.mount: Deactivated successfully. 
Aug 13 01:01:58.350265 env[1846]: time="2025-08-13T01:01:58.348729903Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:01:58.398838 env[1846]: time="2025-08-13T01:01:58.398772044Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7\"" Aug 13 01:01:58.400796 env[1846]: time="2025-08-13T01:01:58.400761100Z" level=info msg="StartContainer for \"7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7\"" Aug 13 01:01:58.484559 env[1846]: time="2025-08-13T01:01:58.484522999Z" level=info msg="StartContainer for \"7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7\" returns successfully" Aug 13 01:01:58.559690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647676409.mount: Deactivated successfully. Aug 13 01:01:58.569866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7-rootfs.mount: Deactivated successfully. 
Aug 13 01:01:58.662966 env[1846]: time="2025-08-13T01:01:58.662551519Z" level=info msg="shim disconnected" id=7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7 Aug 13 01:01:58.663315 env[1846]: time="2025-08-13T01:01:58.663277499Z" level=warning msg="cleaning up after shim disconnected" id=7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7 namespace=k8s.io Aug 13 01:01:58.663467 env[1846]: time="2025-08-13T01:01:58.663453009Z" level=info msg="cleaning up dead shim" Aug 13 01:01:58.672931 env[1846]: time="2025-08-13T01:01:58.672889935Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3398 runtime=io.containerd.runc.v2\n" Aug 13 01:01:58.749471 env[1846]: time="2025-08-13T01:01:58.749413057Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:58.751441 env[1846]: time="2025-08-13T01:01:58.751397542Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:58.753430 env[1846]: time="2025-08-13T01:01:58.753401247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:01:58.753995 env[1846]: time="2025-08-13T01:01:58.753962486Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:01:58.756666 env[1846]: 
time="2025-08-13T01:01:58.756618286Z" level=info msg="CreateContainer within sandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:01:58.777867 env[1846]: time="2025-08-13T01:01:58.777807195Z" level=info msg="CreateContainer within sandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\"" Aug 13 01:01:58.780395 env[1846]: time="2025-08-13T01:01:58.778918272Z" level=info msg="StartContainer for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\"" Aug 13 01:01:58.838916 env[1846]: time="2025-08-13T01:01:58.838861178Z" level=info msg="StartContainer for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" returns successfully" Aug 13 01:01:59.337515 env[1846]: time="2025-08-13T01:01:59.336215933Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:01:59.350370 env[1846]: time="2025-08-13T01:01:59.350332753Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702\"" Aug 13 01:01:59.351335 env[1846]: time="2025-08-13T01:01:59.351305518Z" level=info msg="StartContainer for \"4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702\"" Aug 13 01:01:59.423841 env[1846]: time="2025-08-13T01:01:59.423792914Z" level=info msg="StartContainer for \"4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702\" returns successfully" Aug 13 01:01:59.445978 kubelet[2758]: I0813 01:01:59.445927 2758 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xpcx9" podStartSLOduration=1.829446793 podStartE2EDuration="12.445906488s" podCreationTimestamp="2025-08-13 01:01:47 +0000 UTC" firstStartedPulling="2025-08-13 01:01:48.138503826 +0000 UTC m=+5.109347651" lastFinishedPulling="2025-08-13 01:01:58.754963533 +0000 UTC m=+15.725807346" observedRunningTime="2025-08-13 01:01:59.355771634 +0000 UTC m=+16.326615466" watchObservedRunningTime="2025-08-13 01:01:59.445906488 +0000 UTC m=+16.416750319" Aug 13 01:01:59.496653 env[1846]: time="2025-08-13T01:01:59.496610746Z" level=info msg="shim disconnected" id=4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702 Aug 13 01:01:59.496653 env[1846]: time="2025-08-13T01:01:59.496651818Z" level=warning msg="cleaning up after shim disconnected" id=4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702 namespace=k8s.io Aug 13 01:01:59.496883 env[1846]: time="2025-08-13T01:01:59.496661156Z" level=info msg="cleaning up dead shim" Aug 13 01:01:59.522236 env[1846]: time="2025-08-13T01:01:59.520263082Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:01:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3491 runtime=io.containerd.runc.v2\n" Aug 13 01:02:00.333698 env[1846]: time="2025-08-13T01:02:00.333660103Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:02:00.358048 env[1846]: time="2025-08-13T01:02:00.358001010Z" level=info msg="CreateContainer within sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\"" Aug 13 01:02:00.358721 env[1846]: time="2025-08-13T01:02:00.358673320Z" level=info msg="StartContainer for 
\"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\"" Aug 13 01:02:00.429454 env[1846]: time="2025-08-13T01:02:00.429401554Z" level=info msg="StartContainer for \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\" returns successfully" Aug 13 01:02:00.677083 kubelet[2758]: I0813 01:02:00.676905 2758 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:02:00.848236 kubelet[2758]: I0813 01:02:00.848183 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11bcc140-895a-4766-8371-b4ab0b4db295-config-volume\") pod \"coredns-7c65d6cfc9-rpmv9\" (UID: \"11bcc140-895a-4766-8371-b4ab0b4db295\") " pod="kube-system/coredns-7c65d6cfc9-rpmv9" Aug 13 01:02:00.848236 kubelet[2758]: I0813 01:02:00.848242 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/804eb000-cfba-43e6-8048-537d56374440-config-volume\") pod \"coredns-7c65d6cfc9-6bv2h\" (UID: \"804eb000-cfba-43e6-8048-537d56374440\") " pod="kube-system/coredns-7c65d6cfc9-6bv2h" Aug 13 01:02:00.848428 kubelet[2758]: I0813 01:02:00.848276 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4zfb\" (UniqueName: \"kubernetes.io/projected/11bcc140-895a-4766-8371-b4ab0b4db295-kube-api-access-j4zfb\") pod \"coredns-7c65d6cfc9-rpmv9\" (UID: \"11bcc140-895a-4766-8371-b4ab0b4db295\") " pod="kube-system/coredns-7c65d6cfc9-rpmv9" Aug 13 01:02:00.848428 kubelet[2758]: I0813 01:02:00.848295 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2fbz\" (UniqueName: \"kubernetes.io/projected/804eb000-cfba-43e6-8048-537d56374440-kube-api-access-s2fbz\") pod \"coredns-7c65d6cfc9-6bv2h\" (UID: \"804eb000-cfba-43e6-8048-537d56374440\") " 
pod="kube-system/coredns-7c65d6cfc9-6bv2h" Aug 13 01:02:01.035964 env[1846]: time="2025-08-13T01:02:01.035809737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpmv9,Uid:11bcc140-895a-4766-8371-b4ab0b4db295,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:01.038759 env[1846]: time="2025-08-13T01:02:01.038701421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bv2h,Uid:804eb000-cfba-43e6-8048-537d56374440,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:01.385102 kubelet[2758]: I0813 01:02:01.384923 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qx2nx" podStartSLOduration=5.837790648 podStartE2EDuration="14.384896777s" podCreationTimestamp="2025-08-13 01:01:47 +0000 UTC" firstStartedPulling="2025-08-13 01:01:47.989356142 +0000 UTC m=+4.960199952" lastFinishedPulling="2025-08-13 01:01:56.536462259 +0000 UTC m=+13.507306081" observedRunningTime="2025-08-13 01:02:01.383529718 +0000 UTC m=+18.354373576" watchObservedRunningTime="2025-08-13 01:02:01.384896777 +0000 UTC m=+18.355740613" Aug 13 01:02:03.154868 amazon-ssm-agent[1842]: 2025-08-13 01:02:03 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Aug 13 01:02:03.991229 systemd-networkd[1517]: cilium_host: Link UP Aug 13 01:02:04.000458 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 01:02:04.001703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 01:02:03.995615 systemd-networkd[1517]: cilium_net: Link UP Aug 13 01:02:03.996776 systemd-networkd[1517]: cilium_net: Gained carrier Aug 13 01:02:04.000706 systemd-networkd[1517]: cilium_host: Gained carrier Aug 13 01:02:04.001108 (udev-worker)[3652]: Network interface NamePolicy= disabled on kernel command line. 
Aug 13 01:02:04.005452 systemd-networkd[1517]: cilium_net: Gained IPv6LL Aug 13 01:02:04.007500 (udev-worker)[3651]: Network interface NamePolicy= disabled on kernel command line. Aug 13 01:02:04.278017 (udev-worker)[3667]: Network interface NamePolicy= disabled on kernel command line. Aug 13 01:02:04.285033 systemd-networkd[1517]: cilium_vxlan: Link UP Aug 13 01:02:04.285041 systemd-networkd[1517]: cilium_vxlan: Gained carrier Aug 13 01:02:04.681687 systemd-networkd[1517]: cilium_host: Gained IPv6LL Aug 13 01:02:05.048353 kernel: NET: Registered PF_ALG protocol family Aug 13 01:02:05.881439 systemd-networkd[1517]: lxc_health: Link UP Aug 13 01:02:05.884833 systemd-networkd[1517]: lxc_health: Gained carrier Aug 13 01:02:05.885286 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 01:02:05.961383 systemd-networkd[1517]: cilium_vxlan: Gained IPv6LL Aug 13 01:02:06.154364 systemd-networkd[1517]: lxc97220572756b: Link UP Aug 13 01:02:06.160226 kernel: eth0: renamed from tmp0b940 Aug 13 01:02:06.169264 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc97220572756b: link becomes ready Aug 13 01:02:06.169072 systemd-networkd[1517]: lxc97220572756b: Gained carrier Aug 13 01:02:06.174075 systemd-networkd[1517]: lxc1e769711a5e1: Link UP Aug 13 01:02:06.189637 kernel: eth0: renamed from tmpc0e75 Aug 13 01:02:06.201347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1e769711a5e1: link becomes ready Aug 13 01:02:06.200808 systemd-networkd[1517]: lxc1e769711a5e1: Gained carrier Aug 13 01:02:06.207853 (udev-worker)[3666]: Network interface NamePolicy= disabled on kernel command line. Aug 13 01:02:07.625425 systemd-networkd[1517]: lxc_health: Gained IPv6LL Aug 13 01:02:07.689844 systemd-networkd[1517]: lxc1e769711a5e1: Gained IPv6LL Aug 13 01:02:07.945832 systemd-networkd[1517]: lxc97220572756b: Gained IPv6LL Aug 13 01:02:10.743092 env[1846]: time="2025-08-13T01:02:10.743015445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:02:10.746420 env[1846]: time="2025-08-13T01:02:10.746373526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:02:10.746750 env[1846]: time="2025-08-13T01:02:10.746702477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:02:10.747126 env[1846]: time="2025-08-13T01:02:10.747078069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b940345839d1491783fb4f706402974df9fd6e18c820d703efecdede8caff1e pid=4023 runtime=io.containerd.runc.v2 Aug 13 01:02:10.788741 env[1846]: time="2025-08-13T01:02:10.788646647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:02:10.788996 env[1846]: time="2025-08-13T01:02:10.788954396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:02:10.789136 env[1846]: time="2025-08-13T01:02:10.789108991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:02:10.789549 env[1846]: time="2025-08-13T01:02:10.789488282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0e759c36c4132705a0f3eb53c3c7513b8b16fc2ad59551a119f8aa0a5ce45bb pid=4046 runtime=io.containerd.runc.v2 Aug 13 01:02:10.814049 systemd[1]: run-containerd-runc-k8s.io-0b940345839d1491783fb4f706402974df9fd6e18c820d703efecdede8caff1e-runc.LuYf7d.mount: Deactivated successfully. 
Aug 13 01:02:10.956344 env[1846]: time="2025-08-13T01:02:10.956293931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rpmv9,Uid:11bcc140-895a-4766-8371-b4ab0b4db295,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b940345839d1491783fb4f706402974df9fd6e18c820d703efecdede8caff1e\"" Aug 13 01:02:10.985073 env[1846]: time="2025-08-13T01:02:10.985015264Z" level=info msg="CreateContainer within sandbox \"0b940345839d1491783fb4f706402974df9fd6e18c820d703efecdede8caff1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:02:10.987311 env[1846]: time="2025-08-13T01:02:10.987269528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bv2h,Uid:804eb000-cfba-43e6-8048-537d56374440,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0e759c36c4132705a0f3eb53c3c7513b8b16fc2ad59551a119f8aa0a5ce45bb\"" Aug 13 01:02:10.995872 env[1846]: time="2025-08-13T01:02:10.994947830Z" level=info msg="CreateContainer within sandbox \"c0e759c36c4132705a0f3eb53c3c7513b8b16fc2ad59551a119f8aa0a5ce45bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:02:11.039858 env[1846]: time="2025-08-13T01:02:11.039782656Z" level=info msg="CreateContainer within sandbox \"0b940345839d1491783fb4f706402974df9fd6e18c820d703efecdede8caff1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6965d5e46f863e732ae2da8113b548ba2b6c4ec6773e3c7b70250416e83dc8b7\"" Aug 13 01:02:11.044978 env[1846]: time="2025-08-13T01:02:11.042649584Z" level=info msg="CreateContainer within sandbox \"c0e759c36c4132705a0f3eb53c3c7513b8b16fc2ad59551a119f8aa0a5ce45bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a612ff65165f19ef424a8a4542519ac5c834647d1dcb50f01c9bbd184bccdc8b\"" Aug 13 01:02:11.047265 env[1846]: time="2025-08-13T01:02:11.045498934Z" level=info msg="StartContainer for \"a612ff65165f19ef424a8a4542519ac5c834647d1dcb50f01c9bbd184bccdc8b\"" Aug 13 01:02:11.048733 env[1846]: 
time="2025-08-13T01:02:11.048691383Z" level=info msg="StartContainer for \"6965d5e46f863e732ae2da8113b548ba2b6c4ec6773e3c7b70250416e83dc8b7\"" Aug 13 01:02:11.136266 env[1846]: time="2025-08-13T01:02:11.136165084Z" level=info msg="StartContainer for \"6965d5e46f863e732ae2da8113b548ba2b6c4ec6773e3c7b70250416e83dc8b7\" returns successfully" Aug 13 01:02:11.139008 env[1846]: time="2025-08-13T01:02:11.138824167Z" level=info msg="StartContainer for \"a612ff65165f19ef424a8a4542519ac5c834647d1dcb50f01c9bbd184bccdc8b\" returns successfully" Aug 13 01:02:11.402467 kubelet[2758]: I0813 01:02:11.402419 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6bv2h" podStartSLOduration=24.402402399 podStartE2EDuration="24.402402399s" podCreationTimestamp="2025-08-13 01:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:11.40181234 +0000 UTC m=+28.372656173" watchObservedRunningTime="2025-08-13 01:02:11.402402399 +0000 UTC m=+28.373246230" Aug 13 01:02:11.416887 kubelet[2758]: I0813 01:02:11.416832 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rpmv9" podStartSLOduration=24.416815103 podStartE2EDuration="24.416815103s" podCreationTimestamp="2025-08-13 01:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:11.416683714 +0000 UTC m=+28.387527546" watchObservedRunningTime="2025-08-13 01:02:11.416815103 +0000 UTC m=+28.387658934" Aug 13 01:02:11.755758 systemd[1]: run-containerd-runc-k8s.io-c0e759c36c4132705a0f3eb53c3c7513b8b16fc2ad59551a119f8aa0a5ce45bb-runc.BG0AIn.mount: Deactivated successfully. Aug 13 01:02:12.825835 amazon-ssm-agent[1842]: 2025-08-13 01:02:12 INFO [HealthCheck] HealthCheck reporting agent health. 
Aug 13 01:02:24.995295 systemd[1]: Started sshd@5-172.31.20.232:22-147.75.109.163:60082.service. Aug 13 01:02:25.200613 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 60082 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:25.202548 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:25.209070 systemd-logind[1835]: New session 6 of user core. Aug 13 01:02:25.209433 systemd[1]: Started session-6.scope. Aug 13 01:02:25.550130 sshd[4189]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:25.553639 systemd[1]: sshd@5-172.31.20.232:22-147.75.109.163:60082.service: Deactivated successfully. Aug 13 01:02:25.556343 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:02:25.556969 systemd-logind[1835]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:02:25.558760 systemd-logind[1835]: Removed session 6. Aug 13 01:02:30.575048 systemd[1]: Started sshd@6-172.31.20.232:22-147.75.109.163:45824.service. Aug 13 01:02:30.741491 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 45824 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:30.742990 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:30.748762 systemd[1]: Started session-7.scope. Aug 13 01:02:30.750382 systemd-logind[1835]: New session 7 of user core. Aug 13 01:02:30.945720 sshd[4203]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:30.949077 systemd[1]: sshd@6-172.31.20.232:22-147.75.109.163:45824.service: Deactivated successfully. Aug 13 01:02:30.949852 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:02:30.950471 systemd-logind[1835]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:02:30.951293 systemd-logind[1835]: Removed session 7. Aug 13 01:02:35.970594 systemd[1]: Started sshd@7-172.31.20.232:22-147.75.109.163:45826.service. 
Aug 13 01:02:36.135142 sshd[4217]: Accepted publickey for core from 147.75.109.163 port 45826 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:36.136854 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:36.142386 systemd[1]: Started session-8.scope. Aug 13 01:02:36.142939 systemd-logind[1835]: New session 8 of user core. Aug 13 01:02:36.333867 sshd[4217]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:36.337514 systemd[1]: sshd@7-172.31.20.232:22-147.75.109.163:45826.service: Deactivated successfully. Aug 13 01:02:36.339882 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:02:36.340359 systemd-logind[1835]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:02:36.344886 systemd-logind[1835]: Removed session 8. Aug 13 01:02:41.358372 systemd[1]: Started sshd@8-172.31.20.232:22-147.75.109.163:46654.service. Aug 13 01:02:41.522454 sshd[4230]: Accepted publickey for core from 147.75.109.163 port 46654 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:41.524188 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:41.530079 systemd[1]: Started session-9.scope. Aug 13 01:02:41.530821 systemd-logind[1835]: New session 9 of user core. Aug 13 01:02:41.727415 sshd[4230]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:41.730825 systemd[1]: sshd@8-172.31.20.232:22-147.75.109.163:46654.service: Deactivated successfully. Aug 13 01:02:41.732174 systemd-logind[1835]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:02:41.732874 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:02:41.733828 systemd-logind[1835]: Removed session 9. Aug 13 01:02:41.750682 systemd[1]: Started sshd@9-172.31.20.232:22-147.75.109.163:46660.service. 
Aug 13 01:02:41.912656 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 46660 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:41.914249 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:41.920313 systemd[1]: Started session-10.scope. Aug 13 01:02:41.920632 systemd-logind[1835]: New session 10 of user core. Aug 13 01:02:42.187276 sshd[4244]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:42.195588 systemd-logind[1835]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:02:42.198593 systemd[1]: sshd@9-172.31.20.232:22-147.75.109.163:46660.service: Deactivated successfully. Aug 13 01:02:42.199466 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:02:42.200841 systemd-logind[1835]: Removed session 10. Aug 13 01:02:42.212404 systemd[1]: Started sshd@10-172.31.20.232:22-147.75.109.163:46668.service. Aug 13 01:02:42.380437 sshd[4255]: Accepted publickey for core from 147.75.109.163 port 46668 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:42.381867 sshd[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:42.386956 systemd[1]: Started session-11.scope. Aug 13 01:02:42.387310 systemd-logind[1835]: New session 11 of user core. Aug 13 01:02:42.588765 sshd[4255]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:42.591943 systemd[1]: sshd@10-172.31.20.232:22-147.75.109.163:46668.service: Deactivated successfully. Aug 13 01:02:42.592753 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:02:42.593845 systemd-logind[1835]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:02:42.599875 systemd-logind[1835]: Removed session 11. Aug 13 01:02:47.613680 systemd[1]: Started sshd@11-172.31.20.232:22-147.75.109.163:46684.service. 
Aug 13 01:02:47.778752 sshd[4270]: Accepted publickey for core from 147.75.109.163 port 46684 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:47.780663 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:47.786618 systemd[1]: Started session-12.scope. Aug 13 01:02:47.787460 systemd-logind[1835]: New session 12 of user core. Aug 13 01:02:48.002148 sshd[4270]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:48.005792 systemd[1]: sshd@11-172.31.20.232:22-147.75.109.163:46684.service: Deactivated successfully. Aug 13 01:02:48.007523 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:02:48.008104 systemd-logind[1835]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:02:48.009850 systemd-logind[1835]: Removed session 12. Aug 13 01:02:53.028601 systemd[1]: Started sshd@12-172.31.20.232:22-147.75.109.163:52270.service. Aug 13 01:02:53.193132 sshd[4285]: Accepted publickey for core from 147.75.109.163 port 52270 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:53.194644 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:53.200445 systemd[1]: Started session-13.scope. Aug 13 01:02:53.200850 systemd-logind[1835]: New session 13 of user core. Aug 13 01:02:53.389822 sshd[4285]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:53.392844 systemd[1]: sshd@12-172.31.20.232:22-147.75.109.163:52270.service: Deactivated successfully. Aug 13 01:02:53.393677 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:02:53.394687 systemd-logind[1835]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:02:53.395608 systemd-logind[1835]: Removed session 13. Aug 13 01:02:53.412824 systemd[1]: Started sshd@13-172.31.20.232:22-147.75.109.163:52272.service. 
Aug 13 01:02:53.574358 sshd[4298]: Accepted publickey for core from 147.75.109.163 port 52272 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:53.575864 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:53.581886 systemd[1]: Started session-14.scope. Aug 13 01:02:53.582184 systemd-logind[1835]: New session 14 of user core. Aug 13 01:02:57.242112 sshd[4298]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:57.245985 systemd[1]: sshd@13-172.31.20.232:22-147.75.109.163:52272.service: Deactivated successfully. Aug 13 01:02:57.247444 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:02:57.247451 systemd-logind[1835]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:02:57.249249 systemd-logind[1835]: Removed session 14. Aug 13 01:02:57.267089 systemd[1]: Started sshd@14-172.31.20.232:22-147.75.109.163:52282.service. Aug 13 01:02:57.454662 sshd[4308]: Accepted publickey for core from 147.75.109.163 port 52282 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:57.456268 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:57.464072 systemd[1]: Started session-15.scope. Aug 13 01:02:57.464966 systemd-logind[1835]: New session 15 of user core. Aug 13 01:02:59.147140 sshd[4308]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:59.157938 systemd[1]: sshd@14-172.31.20.232:22-147.75.109.163:52282.service: Deactivated successfully. Aug 13 01:02:59.159733 systemd-logind[1835]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:02:59.160025 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:02:59.161475 systemd-logind[1835]: Removed session 15. Aug 13 01:02:59.171809 systemd[1]: Started sshd@15-172.31.20.232:22-147.75.109.163:33818.service. 
Aug 13 01:02:59.337127 sshd[4326]: Accepted publickey for core from 147.75.109.163 port 33818 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:59.338614 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:59.344850 systemd[1]: Started session-16.scope. Aug 13 01:02:59.345060 systemd-logind[1835]: New session 16 of user core. Aug 13 01:02:59.771420 sshd[4326]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:59.775295 systemd[1]: sshd@15-172.31.20.232:22-147.75.109.163:33818.service: Deactivated successfully. Aug 13 01:02:59.776544 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:02:59.777253 systemd-logind[1835]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:02:59.778445 systemd-logind[1835]: Removed session 16. Aug 13 01:02:59.795500 systemd[1]: Started sshd@16-172.31.20.232:22-147.75.109.163:33834.service. Aug 13 01:02:59.961713 sshd[4337]: Accepted publickey for core from 147.75.109.163 port 33834 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:02:59.963206 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:02:59.968821 systemd[1]: Started session-17.scope. Aug 13 01:02:59.969523 systemd-logind[1835]: New session 17 of user core. Aug 13 01:03:00.256620 sshd[4337]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:00.260143 systemd[1]: sshd@16-172.31.20.232:22-147.75.109.163:33834.service: Deactivated successfully. Aug 13 01:03:00.263132 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:03:00.263624 systemd-logind[1835]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:03:00.265555 systemd-logind[1835]: Removed session 17. Aug 13 01:03:05.280929 systemd[1]: Started sshd@17-172.31.20.232:22-147.75.109.163:33850.service. 
Aug 13 01:03:05.444600 sshd[4349]: Accepted publickey for core from 147.75.109.163 port 33850 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:05.446498 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:05.452668 systemd[1]: Started session-18.scope. Aug 13 01:03:05.454123 systemd-logind[1835]: New session 18 of user core. Aug 13 01:03:05.649604 sshd[4349]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:05.652959 systemd[1]: sshd@17-172.31.20.232:22-147.75.109.163:33850.service: Deactivated successfully. Aug 13 01:03:05.654240 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:03:05.654268 systemd-logind[1835]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:03:05.656467 systemd-logind[1835]: Removed session 18. Aug 13 01:03:10.675656 systemd[1]: Started sshd@18-172.31.20.232:22-147.75.109.163:52198.service. Aug 13 01:03:10.842725 sshd[4365]: Accepted publickey for core from 147.75.109.163 port 52198 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:10.844761 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:10.853658 systemd[1]: Started session-19.scope. Aug 13 01:03:10.854748 systemd-logind[1835]: New session 19 of user core. Aug 13 01:03:11.041592 sshd[4365]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:11.048934 systemd[1]: sshd@18-172.31.20.232:22-147.75.109.163:52198.service: Deactivated successfully. Aug 13 01:03:11.050378 systemd-logind[1835]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:03:11.050472 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:03:11.052072 systemd-logind[1835]: Removed session 19. Aug 13 01:03:16.064821 systemd[1]: Started sshd@19-172.31.20.232:22-147.75.109.163:52208.service. 
Aug 13 01:03:16.226608 sshd[4378]: Accepted publickey for core from 147.75.109.163 port 52208 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:16.228711 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:16.235546 systemd[1]: Started session-20.scope. Aug 13 01:03:16.236248 systemd-logind[1835]: New session 20 of user core. Aug 13 01:03:16.433952 sshd[4378]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:16.437279 systemd[1]: sshd@19-172.31.20.232:22-147.75.109.163:52208.service: Deactivated successfully. Aug 13 01:03:16.438692 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:03:16.439304 systemd-logind[1835]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:03:16.441078 systemd-logind[1835]: Removed session 20. Aug 13 01:03:21.459253 systemd[1]: Started sshd@20-172.31.20.232:22-147.75.109.163:59540.service. Aug 13 01:03:21.623061 sshd[4393]: Accepted publickey for core from 147.75.109.163 port 59540 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:21.624697 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:21.630310 systemd[1]: Started session-21.scope. Aug 13 01:03:21.630821 systemd-logind[1835]: New session 21 of user core. Aug 13 01:03:21.815882 sshd[4393]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:21.819819 systemd[1]: sshd@20-172.31.20.232:22-147.75.109.163:59540.service: Deactivated successfully. Aug 13 01:03:21.821036 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:03:21.821513 systemd-logind[1835]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:03:21.822818 systemd-logind[1835]: Removed session 21. Aug 13 01:03:21.839000 systemd[1]: Started sshd@21-172.31.20.232:22-147.75.109.163:59542.service. 
Aug 13 01:03:21.999406 sshd[4405]: Accepted publickey for core from 147.75.109.163 port 59542 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:22.001002 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:22.011496 systemd-logind[1835]: New session 22 of user core. Aug 13 01:03:22.011833 systemd[1]: Started session-22.scope. Aug 13 01:03:24.300528 systemd[1]: run-containerd-runc-k8s.io-efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c-runc.lGCk4J.mount: Deactivated successfully. Aug 13 01:03:24.301978 env[1846]: time="2025-08-13T01:03:24.301933062Z" level=info msg="StopContainer for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" with timeout 30 (s)" Aug 13 01:03:24.302787 env[1846]: time="2025-08-13T01:03:24.302752712Z" level=info msg="Stop container \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" with signal terminated" Aug 13 01:03:24.356896 env[1846]: time="2025-08-13T01:03:24.356792884Z" level=error msg="Failed to pipe stderr of container \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\"" error="read /proc/self/fd/26: file already closed" Aug 13 01:03:24.369527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae-rootfs.mount: Deactivated successfully. 
Aug 13 01:03:24.384635 env[1846]: time="2025-08-13T01:03:24.384504654Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:03:24.390037 env[1846]: time="2025-08-13T01:03:24.389811397Z" level=info msg="shim disconnected" id=91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae Aug 13 01:03:24.390037 env[1846]: time="2025-08-13T01:03:24.389866234Z" level=warning msg="cleaning up after shim disconnected" id=91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae namespace=k8s.io Aug 13 01:03:24.390037 env[1846]: time="2025-08-13T01:03:24.389875365Z" level=info msg="cleaning up dead shim" Aug 13 01:03:24.392135 env[1846]: time="2025-08-13T01:03:24.392097574Z" level=info msg="StopContainer for \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\" with timeout 2 (s)" Aug 13 01:03:24.393750 env[1846]: time="2025-08-13T01:03:24.393706696Z" level=info msg="Stop container \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\" with signal terminated" Aug 13 01:03:24.400853 systemd-networkd[1517]: lxc_health: Link DOWN Aug 13 01:03:24.400860 systemd-networkd[1517]: lxc_health: Lost carrier Aug 13 01:03:24.402802 env[1846]: time="2025-08-13T01:03:24.402766933Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4452 runtime=io.containerd.runc.v2\n" Aug 13 01:03:24.406522 env[1846]: time="2025-08-13T01:03:24.406478442Z" level=info msg="StopContainer for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" returns successfully" Aug 13 01:03:24.407277 env[1846]: time="2025-08-13T01:03:24.407246958Z" level=info msg="StopPodSandbox for \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\"" Aug 13 01:03:24.407471 
env[1846]: time="2025-08-13T01:03:24.407443733Z" level=info msg="Container to stop \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:24.418682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956-shm.mount: Deactivated successfully. Aug 13 01:03:24.484291 env[1846]: time="2025-08-13T01:03:24.484155083Z" level=info msg="shim disconnected" id=efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c Aug 13 01:03:24.484291 env[1846]: time="2025-08-13T01:03:24.484285306Z" level=warning msg="cleaning up after shim disconnected" id=efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c namespace=k8s.io Aug 13 01:03:24.484744 env[1846]: time="2025-08-13T01:03:24.484301227Z" level=info msg="cleaning up dead shim" Aug 13 01:03:24.485537 env[1846]: time="2025-08-13T01:03:24.484156768Z" level=info msg="shim disconnected" id=f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956 Aug 13 01:03:24.485647 env[1846]: time="2025-08-13T01:03:24.485549914Z" level=warning msg="cleaning up after shim disconnected" id=f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956 namespace=k8s.io Aug 13 01:03:24.485647 env[1846]: time="2025-08-13T01:03:24.485569212Z" level=info msg="cleaning up dead shim" Aug 13 01:03:24.498753 env[1846]: time="2025-08-13T01:03:24.498697667Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4512 runtime=io.containerd.runc.v2\n" Aug 13 01:03:24.499087 env[1846]: time="2025-08-13T01:03:24.499052107Z" level=info msg="TearDown network for sandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" successfully" Aug 13 01:03:24.499087 env[1846]: time="2025-08-13T01:03:24.499082681Z" level=info msg="StopPodSandbox for 
\"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" returns successfully" Aug 13 01:03:24.499248 env[1846]: time="2025-08-13T01:03:24.498697665Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4511 runtime=io.containerd.runc.v2\n" Aug 13 01:03:24.503088 env[1846]: time="2025-08-13T01:03:24.502891400Z" level=info msg="StopContainer for \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\" returns successfully" Aug 13 01:03:24.503702 env[1846]: time="2025-08-13T01:03:24.503673416Z" level=info msg="StopPodSandbox for \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\"" Aug 13 01:03:24.503895 env[1846]: time="2025-08-13T01:03:24.503871025Z" level=info msg="Container to stop \"7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:24.503991 env[1846]: time="2025-08-13T01:03:24.503970396Z" level=info msg="Container to stop \"4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:24.504073 env[1846]: time="2025-08-13T01:03:24.504054586Z" level=info msg="Container to stop \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:24.504148 env[1846]: time="2025-08-13T01:03:24.504131893Z" level=info msg="Container to stop \"5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:24.505133 env[1846]: time="2025-08-13T01:03:24.505102312Z" level=info msg="Container to stop \"73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:24.562843 kubelet[2758]: I0813 
01:03:24.561959 2758 scope.go:117] "RemoveContainer" containerID="91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae" Aug 13 01:03:24.566535 env[1846]: time="2025-08-13T01:03:24.565155620Z" level=info msg="shim disconnected" id=5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694 Aug 13 01:03:24.566687 env[1846]: time="2025-08-13T01:03:24.566542347Z" level=warning msg="cleaning up after shim disconnected" id=5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694 namespace=k8s.io Aug 13 01:03:24.566687 env[1846]: time="2025-08-13T01:03:24.566557794Z" level=info msg="cleaning up dead shim" Aug 13 01:03:24.567118 env[1846]: time="2025-08-13T01:03:24.567088384Z" level=info msg="RemoveContainer for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\"" Aug 13 01:03:24.572693 env[1846]: time="2025-08-13T01:03:24.572650680Z" level=info msg="RemoveContainer for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" returns successfully" Aug 13 01:03:24.572971 kubelet[2758]: I0813 01:03:24.572949 2758 scope.go:117] "RemoveContainer" containerID="91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae" Aug 13 01:03:24.573336 env[1846]: time="2025-08-13T01:03:24.573260597Z" level=error msg="ContainerStatus for \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\": not found" Aug 13 01:03:24.576123 kubelet[2758]: E0813 01:03:24.576062 2758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\": not found" containerID="91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae" Aug 13 01:03:24.577909 kubelet[2758]: I0813 01:03:24.577738 2758 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae"} err="failed to get container status \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"91f3cdca5e578c078c475ce79035554f6eabccf92d506ae9b4d261e26a5cd0ae\": not found" Aug 13 01:03:24.579124 env[1846]: time="2025-08-13T01:03:24.579085862Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4558 runtime=io.containerd.runc.v2\n" Aug 13 01:03:24.579640 env[1846]: time="2025-08-13T01:03:24.579598510Z" level=info msg="TearDown network for sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" successfully" Aug 13 01:03:24.579640 env[1846]: time="2025-08-13T01:03:24.579631533Z" level=info msg="StopPodSandbox for \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" returns successfully" Aug 13 01:03:24.609764 kubelet[2758]: I0813 01:03:24.609716 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxzjt\" (UniqueName: \"kubernetes.io/projected/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-kube-api-access-hxzjt\") pod \"98d0a19a-56f0-48cb-b1de-03bcce9bf4ed\" (UID: \"98d0a19a-56f0-48cb-b1de-03bcce9bf4ed\") " Aug 13 01:03:24.609948 kubelet[2758]: I0813 01:03:24.609787 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-cilium-config-path\") pod \"98d0a19a-56f0-48cb-b1de-03bcce9bf4ed\" (UID: \"98d0a19a-56f0-48cb-b1de-03bcce9bf4ed\") " Aug 13 01:03:24.625846 kubelet[2758]: I0813 01:03:24.622651 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-cilium-config-path" 
(OuterVolumeSpecName: "cilium-config-path") pod "98d0a19a-56f0-48cb-b1de-03bcce9bf4ed" (UID: "98d0a19a-56f0-48cb-b1de-03bcce9bf4ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:03:24.627361 kubelet[2758]: I0813 01:03:24.627189 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-kube-api-access-hxzjt" (OuterVolumeSpecName: "kube-api-access-hxzjt") pod "98d0a19a-56f0-48cb-b1de-03bcce9bf4ed" (UID: "98d0a19a-56f0-48cb-b1de-03bcce9bf4ed"). InnerVolumeSpecName "kube-api-access-hxzjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:03:24.710465 kubelet[2758]: I0813 01:03:24.710424 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-config-path\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710465 kubelet[2758]: I0813 01:03:24.710465 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cni-path\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710465 kubelet[2758]: I0813 01:03:24.710480 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-lib-modules\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710703 kubelet[2758]: I0813 01:03:24.710495 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-cgroup\") pod 
\"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710703 kubelet[2758]: I0813 01:03:24.710514 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a1cce10-0722-4fbb-ade6-78a2897e8100-clustermesh-secrets\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710703 kubelet[2758]: I0813 01:03:24.710528 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-xtables-lock\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710703 kubelet[2758]: I0813 01:03:24.710543 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-bpf-maps\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710703 kubelet[2758]: I0813 01:03:24.710557 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-kernel\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710703 kubelet[2758]: I0813 01:03:24.710574 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf7j6\" (UniqueName: \"kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-kube-api-access-pf7j6\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710875 kubelet[2758]: I0813 01:03:24.710588 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-run\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710875 kubelet[2758]: I0813 01:03:24.710601 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-etc-cni-netd\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710875 kubelet[2758]: I0813 01:03:24.710615 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-net\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710875 kubelet[2758]: I0813 01:03:24.710631 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-hubble-tls\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710875 kubelet[2758]: I0813 01:03:24.710654 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-hostproc\") pod \"3a1cce10-0722-4fbb-ade6-78a2897e8100\" (UID: \"3a1cce10-0722-4fbb-ade6-78a2897e8100\") " Aug 13 01:03:24.710875 kubelet[2758]: I0813 01:03:24.710686 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-cilium-config-path\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.711034 kubelet[2758]: I0813 01:03:24.710696 2758 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-hxzjt\" (UniqueName: \"kubernetes.io/projected/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed-kube-api-access-hxzjt\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.711034 kubelet[2758]: I0813 01:03:24.710744 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-hostproc" (OuterVolumeSpecName: "hostproc") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.711155 kubelet[2758]: I0813 01:03:24.711125 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.711266 kubelet[2758]: I0813 01:03:24.711253 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cni-path" (OuterVolumeSpecName: "cni-path") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.711346 kubelet[2758]: I0813 01:03:24.711335 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.711408 kubelet[2758]: I0813 01:03:24.711398 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.712855 kubelet[2758]: I0813 01:03:24.712821 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:03:24.712964 kubelet[2758]: I0813 01:03:24.712870 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.714701 kubelet[2758]: I0813 01:03:24.714676 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1cce10-0722-4fbb-ade6-78a2897e8100-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:03:24.714842 kubelet[2758]: I0813 01:03:24.714830 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.714908 kubelet[2758]: I0813 01:03:24.714899 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.714965 kubelet[2758]: I0813 01:03:24.714957 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.715018 kubelet[2758]: I0813 01:03:24.715010 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:24.715549 kubelet[2758]: I0813 01:03:24.715523 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-kube-api-access-pf7j6" (OuterVolumeSpecName: "kube-api-access-pf7j6") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "kube-api-access-pf7j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:03:24.717924 kubelet[2758]: I0813 01:03:24.717878 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3a1cce10-0722-4fbb-ade6-78a2897e8100" (UID: "3a1cce10-0722-4fbb-ade6-78a2897e8100"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:03:24.811488 kubelet[2758]: I0813 01:03:24.811422 2758 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-bpf-maps\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811488 kubelet[2758]: I0813 01:03:24.811458 2758 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-kernel\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811488 kubelet[2758]: I0813 01:03:24.811471 2758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf7j6\" (UniqueName: \"kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-kube-api-access-pf7j6\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811488 kubelet[2758]: I0813 01:03:24.811481 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-run\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811488 kubelet[2758]: I0813 01:03:24.811493 2758 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-etc-cni-netd\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811488 kubelet[2758]: I0813 01:03:24.811502 2758 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-host-proc-sys-net\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811511 2758 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3a1cce10-0722-4fbb-ade6-78a2897e8100-hubble-tls\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811519 2758 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-hostproc\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811527 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-config-path\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811534 2758 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cni-path\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811542 2758 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-lib-modules\") on 
node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811549 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-cilium-cgroup\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811561 2758 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3a1cce10-0722-4fbb-ade6-78a2897e8100-clustermesh-secrets\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:24.811806 kubelet[2758]: I0813 01:03:24.811570 2758 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a1cce10-0722-4fbb-ade6-78a2897e8100-xtables-lock\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:25.257443 kubelet[2758]: I0813 01:03:25.257372 2758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d0a19a-56f0-48cb-b1de-03bcce9bf4ed" path="/var/lib/kubelet/pods/98d0a19a-56f0-48cb-b1de-03bcce9bf4ed/volumes" Aug 13 01:03:25.288852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c-rootfs.mount: Deactivated successfully. Aug 13 01:03:25.289024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956-rootfs.mount: Deactivated successfully. Aug 13 01:03:25.289127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694-rootfs.mount: Deactivated successfully. Aug 13 01:03:25.289244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694-shm.mount: Deactivated successfully. 
Aug 13 01:03:25.289344 systemd[1]: var-lib-kubelet-pods-98d0a19a\x2d56f0\x2d48cb\x2db1de\x2d03bcce9bf4ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxzjt.mount: Deactivated successfully. Aug 13 01:03:25.289437 systemd[1]: var-lib-kubelet-pods-3a1cce10\x2d0722\x2d4fbb\x2dade6\x2d78a2897e8100-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpf7j6.mount: Deactivated successfully. Aug 13 01:03:25.289528 systemd[1]: var-lib-kubelet-pods-3a1cce10\x2d0722\x2d4fbb\x2dade6\x2d78a2897e8100-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:03:25.289623 systemd[1]: var-lib-kubelet-pods-3a1cce10\x2d0722\x2d4fbb\x2dade6\x2d78a2897e8100-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:03:25.553052 kubelet[2758]: I0813 01:03:25.553029 2758 scope.go:117] "RemoveContainer" containerID="efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c" Aug 13 01:03:25.554787 env[1846]: time="2025-08-13T01:03:25.554709037Z" level=info msg="RemoveContainer for \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\"" Aug 13 01:03:25.561096 env[1846]: time="2025-08-13T01:03:25.561016939Z" level=info msg="RemoveContainer for \"efaa867a26622d57bfb5dff323a3b991c20318e8dcba7ef5ac926b90e2cde59c\" returns successfully" Aug 13 01:03:25.561417 kubelet[2758]: I0813 01:03:25.561391 2758 scope.go:117] "RemoveContainer" containerID="4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702" Aug 13 01:03:25.563154 env[1846]: time="2025-08-13T01:03:25.562826627Z" level=info msg="RemoveContainer for \"4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702\"" Aug 13 01:03:25.568799 env[1846]: time="2025-08-13T01:03:25.568744390Z" level=info msg="RemoveContainer for \"4905ab6ac239310ad2584b9784137914f8f0076f6de9bf7ab315552a9289e702\" returns successfully" Aug 13 01:03:25.569083 kubelet[2758]: I0813 01:03:25.569061 2758 scope.go:117] "RemoveContainer" 
containerID="7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7" Aug 13 01:03:25.573218 env[1846]: time="2025-08-13T01:03:25.573160249Z" level=info msg="RemoveContainer for \"7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7\"" Aug 13 01:03:25.583830 env[1846]: time="2025-08-13T01:03:25.583776240Z" level=info msg="RemoveContainer for \"7035c9ca6036a7c24f11338ed145e92d0491895127c9fccbf0eb92ee68227ab7\" returns successfully" Aug 13 01:03:25.584086 kubelet[2758]: I0813 01:03:25.584051 2758 scope.go:117] "RemoveContainer" containerID="73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134" Aug 13 01:03:25.590379 env[1846]: time="2025-08-13T01:03:25.589408287Z" level=info msg="RemoveContainer for \"73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134\"" Aug 13 01:03:25.595152 env[1846]: time="2025-08-13T01:03:25.595095416Z" level=info msg="RemoveContainer for \"73089cc5e8626e45dbcdcfdfd26335e315ca26f61eb69343867fb062f0857134\" returns successfully" Aug 13 01:03:25.595422 kubelet[2758]: I0813 01:03:25.595387 2758 scope.go:117] "RemoveContainer" containerID="5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1" Aug 13 01:03:25.596764 env[1846]: time="2025-08-13T01:03:25.596730877Z" level=info msg="RemoveContainer for \"5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1\"" Aug 13 01:03:25.602658 env[1846]: time="2025-08-13T01:03:25.602592698Z" level=info msg="RemoveContainer for \"5b58c1d0590add9bfc5d1cf760cc267d0c5051a27e09f3bf30a7facc5cb7fed1\" returns successfully" Aug 13 01:03:26.082621 sshd[4405]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:26.087468 systemd[1]: sshd@21-172.31.20.232:22-147.75.109.163:59542.service: Deactivated successfully. Aug 13 01:03:26.088694 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:03:26.090411 systemd-logind[1835]: Session 22 logged out. Waiting for processes to exit. 
Aug 13 01:03:26.092082 systemd-logind[1835]: Removed session 22. Aug 13 01:03:26.105903 systemd[1]: Started sshd@22-172.31.20.232:22-147.75.109.163:59550.service. Aug 13 01:03:26.294955 sshd[4578]: Accepted publickey for core from 147.75.109.163 port 59550 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:26.297875 sshd[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:26.305905 systemd[1]: Started session-23.scope. Aug 13 01:03:26.306105 systemd-logind[1835]: New session 23 of user core. Aug 13 01:03:26.924062 sshd[4578]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:26.928339 kubelet[2758]: E0813 01:03:26.928302 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" containerName="mount-cgroup" Aug 13 01:03:26.928992 kubelet[2758]: E0813 01:03:26.928969 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" containerName="apply-sysctl-overwrites" Aug 13 01:03:26.929124 kubelet[2758]: E0813 01:03:26.929109 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" containerName="clean-cilium-state" Aug 13 01:03:26.929245 kubelet[2758]: E0813 01:03:26.929232 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" containerName="mount-bpf-fs" Aug 13 01:03:26.929349 kubelet[2758]: E0813 01:03:26.929336 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98d0a19a-56f0-48cb-b1de-03bcce9bf4ed" containerName="cilium-operator" Aug 13 01:03:26.929535 kubelet[2758]: E0813 01:03:26.929510 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" containerName="cilium-agent" Aug 13 01:03:26.929705 kubelet[2758]: I0813 01:03:26.929682 2758 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" containerName="cilium-agent" Aug 13 01:03:26.929839 kubelet[2758]: I0813 01:03:26.929808 2758 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d0a19a-56f0-48cb-b1de-03bcce9bf4ed" containerName="cilium-operator" Aug 13 01:03:26.934067 systemd[1]: sshd@22-172.31.20.232:22-147.75.109.163:59550.service: Deactivated successfully. Aug 13 01:03:26.938978 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:03:26.940675 systemd-logind[1835]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:03:26.950440 systemd-logind[1835]: Removed session 23. Aug 13 01:03:26.960139 systemd[1]: Started sshd@23-172.31.20.232:22-147.75.109.163:59564.service. Aug 13 01:03:27.028077 kubelet[2758]: I0813 01:03:27.028039 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hubble-tls\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.028360 kubelet[2758]: I0813 01:03:27.028329 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-run\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.028508 kubelet[2758]: I0813 01:03:27.028492 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-lib-modules\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.028631 kubelet[2758]: I0813 01:03:27.028617 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-ipsec-secrets\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.028734 kubelet[2758]: I0813 01:03:27.028721 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-bpf-maps\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.028844 kubelet[2758]: I0813 01:03:27.028831 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hostproc\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.028956 kubelet[2758]: I0813 01:03:27.028941 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cni-path\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029064 kubelet[2758]: I0813 01:03:27.029048 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-config-path\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029171 kubelet[2758]: I0813 01:03:27.029158 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-etc-cni-netd\") pod \"cilium-l2fd8\" (UID: 
\"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029287 kubelet[2758]: I0813 01:03:27.029273 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-cgroup\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029391 kubelet[2758]: I0813 01:03:27.029364 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-xtables-lock\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029472 kubelet[2758]: I0813 01:03:27.029460 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-kernel\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029551 kubelet[2758]: I0813 01:03:27.029539 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drjgr\" (UniqueName: \"kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-kube-api-access-drjgr\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029662 kubelet[2758]: I0813 01:03:27.029648 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-clustermesh-secrets\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.029750 
kubelet[2758]: I0813 01:03:27.029739 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-net\") pod \"cilium-l2fd8\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " pod="kube-system/cilium-l2fd8" Aug 13 01:03:27.141452 sshd[4590]: Accepted publickey for core from 147.75.109.163 port 59564 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:27.142641 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:27.155299 systemd[1]: Started session-24.scope. Aug 13 01:03:27.156855 systemd-logind[1835]: New session 24 of user core. Aug 13 01:03:27.252077 env[1846]: time="2025-08-13T01:03:27.251663759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2fd8,Uid:1e95257c-5f2b-46dc-a3b6-f7a056adb18d,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:27.257756 kubelet[2758]: I0813 01:03:27.257722 2758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1cce10-0722-4fbb-ade6-78a2897e8100" path="/var/lib/kubelet/pods/3a1cce10-0722-4fbb-ade6-78a2897e8100/volumes" Aug 13 01:03:27.300810 env[1846]: time="2025-08-13T01:03:27.300727432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:03:27.300810 env[1846]: time="2025-08-13T01:03:27.300771799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:03:27.300810 env[1846]: time="2025-08-13T01:03:27.300787420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:03:27.301344 env[1846]: time="2025-08-13T01:03:27.301299031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc pid=4610 runtime=io.containerd.runc.v2 Aug 13 01:03:27.370436 env[1846]: time="2025-08-13T01:03:27.370391046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2fd8,Uid:1e95257c-5f2b-46dc-a3b6-f7a056adb18d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\"" Aug 13 01:03:27.373967 env[1846]: time="2025-08-13T01:03:27.373925524Z" level=info msg="CreateContainer within sandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:03:27.397870 env[1846]: time="2025-08-13T01:03:27.397814180Z" level=info msg="CreateContainer within sandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\"" Aug 13 01:03:27.398887 env[1846]: time="2025-08-13T01:03:27.398851389Z" level=info msg="StartContainer for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\"" Aug 13 01:03:27.476810 env[1846]: time="2025-08-13T01:03:27.474332189Z" level=info msg="StartContainer for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" returns successfully" Aug 13 01:03:27.506796 sshd[4590]: pam_unix(sshd:session): session closed for user core Aug 13 01:03:27.510027 systemd[1]: sshd@23-172.31.20.232:22-147.75.109.163:59564.service: Deactivated successfully. Aug 13 01:03:27.511531 systemd-logind[1835]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:03:27.511560 systemd[1]: session-24.scope: Deactivated successfully. 
Aug 13 01:03:27.512670 systemd-logind[1835]: Removed session 24. Aug 13 01:03:27.532342 systemd[1]: Started sshd@24-172.31.20.232:22-147.75.109.163:59574.service. Aug 13 01:03:27.580006 env[1846]: time="2025-08-13T01:03:27.579959547Z" level=info msg="StopContainer for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" with timeout 2 (s)" Aug 13 01:03:27.580843 env[1846]: time="2025-08-13T01:03:27.580815671Z" level=info msg="Stop container \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" with signal terminated" Aug 13 01:03:27.649612 env[1846]: time="2025-08-13T01:03:27.649571938Z" level=info msg="shim disconnected" id=f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a Aug 13 01:03:27.649908 env[1846]: time="2025-08-13T01:03:27.649883164Z" level=warning msg="cleaning up after shim disconnected" id=f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a namespace=k8s.io Aug 13 01:03:27.650054 env[1846]: time="2025-08-13T01:03:27.650040965Z" level=info msg="cleaning up dead shim" Aug 13 01:03:27.662353 env[1846]: time="2025-08-13T01:03:27.662307287Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4706 runtime=io.containerd.runc.v2\n" Aug 13 01:03:27.666515 env[1846]: time="2025-08-13T01:03:27.666472715Z" level=info msg="StopContainer for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" returns successfully" Aug 13 01:03:27.666916 env[1846]: time="2025-08-13T01:03:27.666890829Z" level=info msg="StopPodSandbox for \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\"" Aug 13 01:03:27.666965 env[1846]: time="2025-08-13T01:03:27.666946489Z" level=info msg="Container to stop \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:27.705102 env[1846]: time="2025-08-13T01:03:27.705061445Z" level=info 
msg="shim disconnected" id=a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc Aug 13 01:03:27.705763 env[1846]: time="2025-08-13T01:03:27.705735417Z" level=warning msg="cleaning up after shim disconnected" id=a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc namespace=k8s.io Aug 13 01:03:27.705860 env[1846]: time="2025-08-13T01:03:27.705848724Z" level=info msg="cleaning up dead shim" Aug 13 01:03:27.707267 sshd[4684]: Accepted publickey for core from 147.75.109.163 port 59574 ssh2: RSA SHA256:LgPaLKY3LN4TBfyOIoir69nxEAguUoPITi+qXSaDutg Aug 13 01:03:27.708873 sshd[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:03:27.713702 systemd[1]: Started session-25.scope. Aug 13 01:03:27.715073 systemd-logind[1835]: New session 25 of user core. Aug 13 01:03:27.718146 env[1846]: time="2025-08-13T01:03:27.717828677Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4739 runtime=io.containerd.runc.v2\n" Aug 13 01:03:27.719755 env[1846]: time="2025-08-13T01:03:27.718395001Z" level=info msg="TearDown network for sandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" successfully" Aug 13 01:03:27.719755 env[1846]: time="2025-08-13T01:03:27.718420109Z" level=info msg="StopPodSandbox for \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" returns successfully" Aug 13 01:03:27.835327 kubelet[2758]: I0813 01:03:27.835284 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-bpf-maps\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835538 kubelet[2758]: I0813 01:03:27.835413 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-etc-cni-netd\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835538 kubelet[2758]: I0813 01:03:27.835454 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-clustermesh-secrets\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835538 kubelet[2758]: I0813 01:03:27.835496 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-kernel\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835538 kubelet[2758]: I0813 01:03:27.835528 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hostproc\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835770 kubelet[2758]: I0813 01:03:27.835567 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-net\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835770 kubelet[2758]: I0813 01:03:27.835591 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-cgroup\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835770 kubelet[2758]: I0813 01:03:27.835620 
2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drjgr\" (UniqueName: \"kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-kube-api-access-drjgr\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835770 kubelet[2758]: I0813 01:03:27.835659 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-xtables-lock\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835770 kubelet[2758]: I0813 01:03:27.835681 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-lib-modules\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.835770 kubelet[2758]: I0813 01:03:27.835725 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-ipsec-secrets\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.836038 kubelet[2758]: I0813 01:03:27.835752 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-config-path\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.836038 kubelet[2758]: I0813 01:03:27.835795 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hubble-tls\") pod 
\"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.836038 kubelet[2758]: I0813 01:03:27.835821 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-run\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.836038 kubelet[2758]: I0813 01:03:27.835843 2758 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cni-path\") pod \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\" (UID: \"1e95257c-5f2b-46dc-a3b6-f7a056adb18d\") " Aug 13 01:03:27.836038 kubelet[2758]: I0813 01:03:27.835957 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.836038 kubelet[2758]: I0813 01:03:27.835996 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.836317 kubelet[2758]: I0813 01:03:27.836031 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.836583 kubelet[2758]: I0813 01:03:27.836407 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.837356 kubelet[2758]: I0813 01:03:27.836694 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.837484 kubelet[2758]: I0813 01:03:27.836720 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.837570 kubelet[2758]: I0813 01:03:27.836753 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.838135 kubelet[2758]: I0813 01:03:27.836767 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.838135 kubelet[2758]: I0813 01:03:27.837425 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.842118 kubelet[2758]: I0813 01:03:27.842074 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:27.846281 kubelet[2758]: I0813 01:03:27.844670 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:03:27.849430 kubelet[2758]: I0813 01:03:27.849387 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:03:27.849637 kubelet[2758]: I0813 01:03:27.849561 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:03:27.852239 kubelet[2758]: I0813 01:03:27.851557 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:03:27.853965 kubelet[2758]: I0813 01:03:27.853918 2758 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-kube-api-access-drjgr" (OuterVolumeSpecName: "kube-api-access-drjgr") pod "1e95257c-5f2b-46dc-a3b6-f7a056adb18d" (UID: "1e95257c-5f2b-46dc-a3b6-f7a056adb18d"). InnerVolumeSpecName "kube-api-access-drjgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:03:27.936285 kubelet[2758]: I0813 01:03:27.936227 2758 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-kernel\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936285 kubelet[2758]: I0813 01:03:27.936280 2758 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hostproc\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936285 kubelet[2758]: I0813 01:03:27.936290 2758 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-host-proc-sys-net\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936285 kubelet[2758]: I0813 01:03:27.936299 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-cgroup\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936308 2758 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drjgr\" (UniqueName: \"kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-kube-api-access-drjgr\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936319 2758 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-lib-modules\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936327 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-ipsec-secrets\") on node 
\"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936335 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-config-path\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936343 2758 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-xtables-lock\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936351 2758 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-hubble-tls\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936359 2758 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cilium-run\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.936948 kubelet[2758]: I0813 01:03:27.936367 2758 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-cni-path\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.937159 kubelet[2758]: I0813 01:03:27.936375 2758 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-bpf-maps\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.937159 kubelet[2758]: I0813 01:03:27.936382 2758 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-etc-cni-netd\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:27.937159 kubelet[2758]: I0813 
01:03:27.936390 2758 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e95257c-5f2b-46dc-a3b6-f7a056adb18d-clustermesh-secrets\") on node \"ip-172-31-20-232\" DevicePath \"\"" Aug 13 01:03:28.141599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc-shm.mount: Deactivated successfully. Aug 13 01:03:28.141761 systemd[1]: var-lib-kubelet-pods-1e95257c\x2d5f2b\x2d46dc\x2da3b6\x2df7a056adb18d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrjgr.mount: Deactivated successfully. Aug 13 01:03:28.141851 systemd[1]: var-lib-kubelet-pods-1e95257c\x2d5f2b\x2d46dc\x2da3b6\x2df7a056adb18d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 01:03:28.141940 systemd[1]: var-lib-kubelet-pods-1e95257c\x2d5f2b\x2d46dc\x2da3b6\x2df7a056adb18d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:03:28.142026 systemd[1]: var-lib-kubelet-pods-1e95257c\x2d5f2b\x2d46dc\x2da3b6\x2df7a056adb18d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 01:03:28.378710 kubelet[2758]: E0813 01:03:28.378679 2758 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:03:28.578627 kubelet[2758]: I0813 01:03:28.578602 2758 scope.go:117] "RemoveContainer" containerID="f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a" Aug 13 01:03:28.579746 env[1846]: time="2025-08-13T01:03:28.579706431Z" level=info msg="RemoveContainer for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\"" Aug 13 01:03:28.587467 env[1846]: time="2025-08-13T01:03:28.587332217Z" level=info msg="RemoveContainer for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" returns successfully" Aug 13 01:03:28.587796 kubelet[2758]: I0813 01:03:28.587772 2758 scope.go:117] "RemoveContainer" containerID="f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a" Aug 13 01:03:28.588342 env[1846]: time="2025-08-13T01:03:28.588209216Z" level=error msg="ContainerStatus for \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\": not found" Aug 13 01:03:28.588845 kubelet[2758]: E0813 01:03:28.588815 2758 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\": not found" containerID="f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a" Aug 13 01:03:28.589004 kubelet[2758]: I0813 01:03:28.588849 2758 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a"} err="failed to get container status 
\"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7b6ae2d937f98051336111cb329d936d81772a692b71e011a624dfc9393026a\": not found" Aug 13 01:03:28.654235 kubelet[2758]: E0813 01:03:28.654184 2758 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e95257c-5f2b-46dc-a3b6-f7a056adb18d" containerName="mount-cgroup" Aug 13 01:03:28.654396 kubelet[2758]: I0813 01:03:28.654257 2758 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e95257c-5f2b-46dc-a3b6-f7a056adb18d" containerName="mount-cgroup" Aug 13 01:03:28.747657 kubelet[2758]: I0813 01:03:28.747604 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-cilium-cgroup\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747657 kubelet[2758]: I0813 01:03:28.747653 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-etc-cni-netd\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747905 kubelet[2758]: I0813 01:03:28.747677 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/609aca0c-93c0-4b22-a9fb-2e6291243182-hubble-tls\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747905 kubelet[2758]: I0813 01:03:28.747709 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/609aca0c-93c0-4b22-a9fb-2e6291243182-cilium-ipsec-secrets\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747905 kubelet[2758]: I0813 01:03:28.747729 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-cni-path\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747905 kubelet[2758]: I0813 01:03:28.747751 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-hostproc\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747905 kubelet[2758]: I0813 01:03:28.747770 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-lib-modules\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.747905 kubelet[2758]: I0813 01:03:28.747791 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-host-proc-sys-kernel\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748183 kubelet[2758]: I0813 01:03:28.747813 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpssb\" (UniqueName: \"kubernetes.io/projected/609aca0c-93c0-4b22-a9fb-2e6291243182-kube-api-access-hpssb\") pod \"cilium-qszjr\" (UID: 
\"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748183 kubelet[2758]: I0813 01:03:28.747837 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/609aca0c-93c0-4b22-a9fb-2e6291243182-clustermesh-secrets\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748183 kubelet[2758]: I0813 01:03:28.747865 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/609aca0c-93c0-4b22-a9fb-2e6291243182-cilium-config-path\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748183 kubelet[2758]: I0813 01:03:28.747889 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-cilium-run\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748183 kubelet[2758]: I0813 01:03:28.747913 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-bpf-maps\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748183 kubelet[2758]: I0813 01:03:28.747936 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-xtables-lock\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.748403 kubelet[2758]: I0813 
01:03:28.747960 2758 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/609aca0c-93c0-4b22-a9fb-2e6291243182-host-proc-sys-net\") pod \"cilium-qszjr\" (UID: \"609aca0c-93c0-4b22-a9fb-2e6291243182\") " pod="kube-system/cilium-qszjr" Aug 13 01:03:28.961131 env[1846]: time="2025-08-13T01:03:28.961013822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qszjr,Uid:609aca0c-93c0-4b22-a9fb-2e6291243182,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:28.981910 env[1846]: time="2025-08-13T01:03:28.981654894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:03:28.981910 env[1846]: time="2025-08-13T01:03:28.981707585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:03:28.981910 env[1846]: time="2025-08-13T01:03:28.981724785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:03:28.982547 env[1846]: time="2025-08-13T01:03:28.982352934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9 pid=4772 runtime=io.containerd.runc.v2 Aug 13 01:03:29.029000 env[1846]: time="2025-08-13T01:03:29.028651159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qszjr,Uid:609aca0c-93c0-4b22-a9fb-2e6291243182,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\"" Aug 13 01:03:29.031999 env[1846]: time="2025-08-13T01:03:29.031788123Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:03:29.054669 env[1846]: time="2025-08-13T01:03:29.054616138Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"03a62d3ae3d7e09ed5ce168a9cb0dfffc602e839e3c6fdd57fcaa321a3d399c2\"" Aug 13 01:03:29.055979 env[1846]: time="2025-08-13T01:03:29.055454610Z" level=info msg="StartContainer for \"03a62d3ae3d7e09ed5ce168a9cb0dfffc602e839e3c6fdd57fcaa321a3d399c2\"" Aug 13 01:03:29.118466 env[1846]: time="2025-08-13T01:03:29.116721619Z" level=info msg="StartContainer for \"03a62d3ae3d7e09ed5ce168a9cb0dfffc602e839e3c6fdd57fcaa321a3d399c2\" returns successfully" Aug 13 01:03:29.155006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03a62d3ae3d7e09ed5ce168a9cb0dfffc602e839e3c6fdd57fcaa321a3d399c2-rootfs.mount: Deactivated successfully. 
Aug 13 01:03:29.170940 env[1846]: time="2025-08-13T01:03:29.170898555Z" level=info msg="shim disconnected" id=03a62d3ae3d7e09ed5ce168a9cb0dfffc602e839e3c6fdd57fcaa321a3d399c2 Aug 13 01:03:29.171256 env[1846]: time="2025-08-13T01:03:29.171226585Z" level=warning msg="cleaning up after shim disconnected" id=03a62d3ae3d7e09ed5ce168a9cb0dfffc602e839e3c6fdd57fcaa321a3d399c2 namespace=k8s.io Aug 13 01:03:29.171256 env[1846]: time="2025-08-13T01:03:29.171248383Z" level=info msg="cleaning up dead shim" Aug 13 01:03:29.179731 env[1846]: time="2025-08-13T01:03:29.179673866Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4856 runtime=io.containerd.runc.v2\n" Aug 13 01:03:29.254106 kubelet[2758]: I0813 01:03:29.253993 2758 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e95257c-5f2b-46dc-a3b6-f7a056adb18d" path="/var/lib/kubelet/pods/1e95257c-5f2b-46dc-a3b6-f7a056adb18d/volumes" Aug 13 01:03:29.589401 env[1846]: time="2025-08-13T01:03:29.589356055Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:03:29.616508 env[1846]: time="2025-08-13T01:03:29.616469280Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9cbeccc446a9c1c9f4565646c916d77092ddc1ac9c7e0a48b947d2b30c50f525\"" Aug 13 01:03:29.617276 env[1846]: time="2025-08-13T01:03:29.617251816Z" level=info msg="StartContainer for \"9cbeccc446a9c1c9f4565646c916d77092ddc1ac9c7e0a48b947d2b30c50f525\"" Aug 13 01:03:29.678345 env[1846]: time="2025-08-13T01:03:29.678300394Z" level=info msg="StartContainer for \"9cbeccc446a9c1c9f4565646c916d77092ddc1ac9c7e0a48b947d2b30c50f525\" returns successfully" Aug 13 01:03:29.714590 env[1846]: 
time="2025-08-13T01:03:29.714540279Z" level=info msg="shim disconnected" id=9cbeccc446a9c1c9f4565646c916d77092ddc1ac9c7e0a48b947d2b30c50f525 Aug 13 01:03:29.714590 env[1846]: time="2025-08-13T01:03:29.714587351Z" level=warning msg="cleaning up after shim disconnected" id=9cbeccc446a9c1c9f4565646c916d77092ddc1ac9c7e0a48b947d2b30c50f525 namespace=k8s.io Aug 13 01:03:29.714590 env[1846]: time="2025-08-13T01:03:29.714596828Z" level=info msg="cleaning up dead shim" Aug 13 01:03:29.723081 env[1846]: time="2025-08-13T01:03:29.723026940Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4916 runtime=io.containerd.runc.v2\n" Aug 13 01:03:30.141582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cbeccc446a9c1c9f4565646c916d77092ddc1ac9c7e0a48b947d2b30c50f525-rootfs.mount: Deactivated successfully. Aug 13 01:03:30.597040 env[1846]: time="2025-08-13T01:03:30.596985036Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:03:30.626762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349065953.mount: Deactivated successfully. 
Aug 13 01:03:30.644234 env[1846]: time="2025-08-13T01:03:30.644159845Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"168b0466663d7dce9f4950adb2e8c4885aa1886ba7579e33deff1e4ba573b711\"" Aug 13 01:03:30.645076 env[1846]: time="2025-08-13T01:03:30.645026121Z" level=info msg="StartContainer for \"168b0466663d7dce9f4950adb2e8c4885aa1886ba7579e33deff1e4ba573b711\"" Aug 13 01:03:30.709076 env[1846]: time="2025-08-13T01:03:30.709035183Z" level=info msg="StartContainer for \"168b0466663d7dce9f4950adb2e8c4885aa1886ba7579e33deff1e4ba573b711\" returns successfully" Aug 13 01:03:30.828970 env[1846]: time="2025-08-13T01:03:30.828341788Z" level=info msg="shim disconnected" id=168b0466663d7dce9f4950adb2e8c4885aa1886ba7579e33deff1e4ba573b711 Aug 13 01:03:30.828970 env[1846]: time="2025-08-13T01:03:30.828720268Z" level=warning msg="cleaning up after shim disconnected" id=168b0466663d7dce9f4950adb2e8c4885aa1886ba7579e33deff1e4ba573b711 namespace=k8s.io Aug 13 01:03:30.828970 env[1846]: time="2025-08-13T01:03:30.828742231Z" level=info msg="cleaning up dead shim" Aug 13 01:03:30.838912 env[1846]: time="2025-08-13T01:03:30.838866268Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4974 runtime=io.containerd.runc.v2\n" Aug 13 01:03:31.610810 env[1846]: time="2025-08-13T01:03:31.610757111Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:03:31.629843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320036998.mount: Deactivated successfully. Aug 13 01:03:31.635789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1209525428.mount: Deactivated successfully. 
Aug 13 01:03:31.642220 env[1846]: time="2025-08-13T01:03:31.642133431Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02888f9bee9adc1d9a74877a60362ae8dce75ad1d23eb4d696b585b1dbc8c0da\"" Aug 13 01:03:31.644095 env[1846]: time="2025-08-13T01:03:31.642708358Z" level=info msg="StartContainer for \"02888f9bee9adc1d9a74877a60362ae8dce75ad1d23eb4d696b585b1dbc8c0da\"" Aug 13 01:03:31.700928 env[1846]: time="2025-08-13T01:03:31.700869510Z" level=info msg="StartContainer for \"02888f9bee9adc1d9a74877a60362ae8dce75ad1d23eb4d696b585b1dbc8c0da\" returns successfully" Aug 13 01:03:31.728855 env[1846]: time="2025-08-13T01:03:31.728810235Z" level=info msg="shim disconnected" id=02888f9bee9adc1d9a74877a60362ae8dce75ad1d23eb4d696b585b1dbc8c0da Aug 13 01:03:31.728855 env[1846]: time="2025-08-13T01:03:31.728852803Z" level=warning msg="cleaning up after shim disconnected" id=02888f9bee9adc1d9a74877a60362ae8dce75ad1d23eb4d696b585b1dbc8c0da namespace=k8s.io Aug 13 01:03:31.728855 env[1846]: time="2025-08-13T01:03:31.728862283Z" level=info msg="cleaning up dead shim" Aug 13 01:03:31.737590 env[1846]: time="2025-08-13T01:03:31.737543526Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5032 runtime=io.containerd.runc.v2\n" Aug 13 01:03:32.614633 env[1846]: time="2025-08-13T01:03:32.614588695Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:03:32.648610 env[1846]: time="2025-08-13T01:03:32.648431976Z" level=info msg="CreateContainer within sandbox \"a3c4319d13f0f752d49a2d14280430c34b0d37afb517024efc8046bc128142a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"c7232c0deaf50e3c105a65b7e2a312603312113c67965f93cc8918c1816bf01c\"" Aug 13 01:03:32.649523 env[1846]: time="2025-08-13T01:03:32.649495244Z" level=info msg="StartContainer for \"c7232c0deaf50e3c105a65b7e2a312603312113c67965f93cc8918c1816bf01c\"" Aug 13 01:03:32.720988 env[1846]: time="2025-08-13T01:03:32.720942338Z" level=info msg="StartContainer for \"c7232c0deaf50e3c105a65b7e2a312603312113c67965f93cc8918c1816bf01c\" returns successfully" Aug 13 01:03:33.639243 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 01:03:34.207287 systemd[1]: run-containerd-runc-k8s.io-c7232c0deaf50e3c105a65b7e2a312603312113c67965f93cc8918c1816bf01c-runc.WEcuFw.mount: Deactivated successfully. Aug 13 01:03:36.388552 systemd[1]: run-containerd-runc-k8s.io-c7232c0deaf50e3c105a65b7e2a312603312113c67965f93cc8918c1816bf01c-runc.5g8gQK.mount: Deactivated successfully. Aug 13 01:03:36.621001 (udev-worker)[5134]: Network interface NamePolicy= disabled on kernel command line. Aug 13 01:03:36.622476 (udev-worker)[5631]: Network interface NamePolicy= disabled on kernel command line. 
Aug 13 01:03:36.627774 systemd-networkd[1517]: lxc_health: Link UP
Aug 13 01:03:36.637262 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 01:03:36.637510 systemd-networkd[1517]: lxc_health: Gained carrier
Aug 13 01:03:36.998564 kubelet[2758]: I0813 01:03:36.998493 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qszjr" podStartSLOduration=8.998456156 podStartE2EDuration="8.998456156s" podCreationTimestamp="2025-08-13 01:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:03:33.643582731 +0000 UTC m=+110.614426564" watchObservedRunningTime="2025-08-13 01:03:36.998456156 +0000 UTC m=+113.969299989"
Aug 13 01:03:38.315354 systemd-networkd[1517]: lxc_health: Gained IPv6LL
Aug 13 01:03:43.256181 env[1846]: time="2025-08-13T01:03:43.256128483Z" level=info msg="StopPodSandbox for \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\""
Aug 13 01:03:43.256840 env[1846]: time="2025-08-13T01:03:43.256255382Z" level=info msg="TearDown network for sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" successfully"
Aug 13 01:03:43.256840 env[1846]: time="2025-08-13T01:03:43.256303779Z" level=info msg="StopPodSandbox for \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" returns successfully"
Aug 13 01:03:43.256968 env[1846]: time="2025-08-13T01:03:43.256851741Z" level=info msg="RemovePodSandbox for \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\""
Aug 13 01:03:43.256968 env[1846]: time="2025-08-13T01:03:43.256886730Z" level=info msg="Forcibly stopping sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\""
Aug 13 01:03:43.257057 env[1846]: time="2025-08-13T01:03:43.256981294Z" level=info msg="TearDown network for sandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" successfully"
Aug 13 01:03:43.263444 env[1846]: time="2025-08-13T01:03:43.263396649Z" level=info msg="RemovePodSandbox \"5520eed74605f5d1be1956a9010b3a516dc17e0259073cb1886f47092902d694\" returns successfully"
Aug 13 01:03:43.263911 env[1846]: time="2025-08-13T01:03:43.263860898Z" level=info msg="StopPodSandbox for \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\""
Aug 13 01:03:43.264657 env[1846]: time="2025-08-13T01:03:43.263999261Z" level=info msg="TearDown network for sandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" successfully"
Aug 13 01:03:43.264657 env[1846]: time="2025-08-13T01:03:43.264117274Z" level=info msg="StopPodSandbox for \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" returns successfully"
Aug 13 01:03:43.264779 env[1846]: time="2025-08-13T01:03:43.264758959Z" level=info msg="RemovePodSandbox for \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\""
Aug 13 01:03:43.264829 env[1846]: time="2025-08-13T01:03:43.264792145Z" level=info msg="Forcibly stopping sandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\""
Aug 13 01:03:43.264920 env[1846]: time="2025-08-13T01:03:43.264890045Z" level=info msg="TearDown network for sandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" successfully"
Aug 13 01:03:43.268689 env[1846]: time="2025-08-13T01:03:43.268657167Z" level=info msg="RemovePodSandbox \"a34a14161665687303ff02db05ef1cf7d4eaebfd5427950b99cd5a67f77b02dc\" returns successfully"
Aug 13 01:03:43.269501 env[1846]: time="2025-08-13T01:03:43.269472761Z" level=info msg="StopPodSandbox for \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\""
Aug 13 01:03:43.269598 env[1846]: time="2025-08-13T01:03:43.269549213Z" level=info msg="TearDown network for sandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" successfully"
Aug 13 01:03:43.269598 env[1846]: time="2025-08-13T01:03:43.269578209Z" level=info msg="StopPodSandbox for \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" returns successfully"
Aug 13 01:03:43.270238 env[1846]: time="2025-08-13T01:03:43.269980105Z" level=info msg="RemovePodSandbox for \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\""
Aug 13 01:03:43.270238 env[1846]: time="2025-08-13T01:03:43.270008114Z" level=info msg="Forcibly stopping sandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\""
Aug 13 01:03:43.270238 env[1846]: time="2025-08-13T01:03:43.270073497Z" level=info msg="TearDown network for sandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" successfully"
Aug 13 01:03:43.273876 env[1846]: time="2025-08-13T01:03:43.273773023Z" level=info msg="RemovePodSandbox \"f04c0c0cdabac85e2e177fc32916467365fe139360098de97a8f80db145fb956\" returns successfully"
Aug 13 01:03:43.729729 sshd[4684]: pam_unix(sshd:session): session closed for user core
Aug 13 01:03:43.742174 systemd[1]: sshd@24-172.31.20.232:22-147.75.109.163:59574.service: Deactivated successfully.
Aug 13 01:03:43.743237 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 01:03:43.744168 systemd-logind[1835]: Session 25 logged out. Waiting for processes to exit.
Aug 13 01:03:43.747953 systemd-logind[1835]: Removed session 25.
Aug 13 01:03:58.325457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1-rootfs.mount: Deactivated successfully.
Aug 13 01:03:58.340372 env[1846]: time="2025-08-13T01:03:58.340293122Z" level=info msg="shim disconnected" id=91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1
Aug 13 01:03:58.340372 env[1846]: time="2025-08-13T01:03:58.340337713Z" level=warning msg="cleaning up after shim disconnected" id=91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1 namespace=k8s.io
Aug 13 01:03:58.340372 env[1846]: time="2025-08-13T01:03:58.340347237Z" level=info msg="cleaning up dead shim"
Aug 13 01:03:58.348980 env[1846]: time="2025-08-13T01:03:58.348937597Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:03:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5750 runtime=io.containerd.runc.v2\n"
Aug 13 01:03:58.684979 kubelet[2758]: I0813 01:03:58.684659 2758 scope.go:117] "RemoveContainer" containerID="91c66dfa19a3dacee53a9a2e70304abbf54eb188dc9116fc151aeb195145daa1"
Aug 13 01:03:58.689236 env[1846]: time="2025-08-13T01:03:58.689166566Z" level=info msg="CreateContainer within sandbox \"bdd5c601e258017788ef393dfcc4163b7403150c223afea5a4e1b4c96050a3f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 13 01:03:58.779302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243848749.mount: Deactivated successfully.
Aug 13 01:03:58.782537 env[1846]: time="2025-08-13T01:03:58.782108071Z" level=info msg="CreateContainer within sandbox \"bdd5c601e258017788ef393dfcc4163b7403150c223afea5a4e1b4c96050a3f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2a81774ed976a60c9615264f229a97a76d8b5c83d0a6b0c5839ec42984684242\""
Aug 13 01:03:58.783110 env[1846]: time="2025-08-13T01:03:58.783072740Z" level=info msg="StartContainer for \"2a81774ed976a60c9615264f229a97a76d8b5c83d0a6b0c5839ec42984684242\""
Aug 13 01:03:58.854068 env[1846]: time="2025-08-13T01:03:58.854017041Z" level=info msg="StartContainer for \"2a81774ed976a60c9615264f229a97a76d8b5c83d0a6b0c5839ec42984684242\" returns successfully"
Aug 13 01:04:03.674625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43-rootfs.mount: Deactivated successfully.
Aug 13 01:04:03.699383 env[1846]: time="2025-08-13T01:04:03.699326066Z" level=info msg="shim disconnected" id=87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43
Aug 13 01:04:03.700091 env[1846]: time="2025-08-13T01:04:03.700063175Z" level=warning msg="cleaning up after shim disconnected" id=87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43 namespace=k8s.io
Aug 13 01:04:03.700217 env[1846]: time="2025-08-13T01:04:03.700184080Z" level=info msg="cleaning up dead shim"
Aug 13 01:04:03.733615 env[1846]: time="2025-08-13T01:04:03.733303303Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:04:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5814 runtime=io.containerd.runc.v2\n"
Aug 13 01:04:04.700329 kubelet[2758]: I0813 01:04:04.700293 2758 scope.go:117] "RemoveContainer" containerID="87d0f46361187528fbca31d1db150b110abdf8f5fac6ccfbbe88a94e43cefc43"
Aug 13 01:04:04.704397 env[1846]: time="2025-08-13T01:04:04.704358218Z" level=info msg="CreateContainer within sandbox \"6126f9aa2c6f75750f36079e4e15a165692c09006a3e04d1ca798c62733d295a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 13 01:04:04.719178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829536972.mount: Deactivated successfully.
Aug 13 01:04:04.729504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826223593.mount: Deactivated successfully.
Aug 13 01:04:04.736871 env[1846]: time="2025-08-13T01:04:04.736812321Z" level=info msg="CreateContainer within sandbox \"6126f9aa2c6f75750f36079e4e15a165692c09006a3e04d1ca798c62733d295a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b975501357aabb1091817a698f8f8fdeb33e73ef81b8ffddc8ce0141ba8fa632\""
Aug 13 01:04:04.737575 env[1846]: time="2025-08-13T01:04:04.737540470Z" level=info msg="StartContainer for \"b975501357aabb1091817a698f8f8fdeb33e73ef81b8ffddc8ce0141ba8fa632\""
Aug 13 01:04:04.816902 env[1846]: time="2025-08-13T01:04:04.816849147Z" level=info msg="StartContainer for \"b975501357aabb1091817a698f8f8fdeb33e73ef81b8ffddc8ce0141ba8fa632\" returns successfully"
Aug 13 01:04:06.419160 kubelet[2758]: E0813 01:04:06.419121 2758 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-20-232)"
Aug 13 01:04:16.419411 kubelet[2758]: E0813 01:04:16.419335 2758 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-232?timeout=10s\": context deadline exceeded"