Sep 6 00:28:05.986129 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 00:28:05.986161 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:28:05.986180 kernel: BIOS-provided physical RAM map: Sep 6 00:28:05.986191 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 6 00:28:05.986202 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 6 00:28:05.986213 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 6 00:28:05.986227 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 6 00:28:05.986239 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 6 00:28:05.986253 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 6 00:28:05.986265 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 6 00:28:05.986276 kernel: NX (Execute Disable) protection: active Sep 6 00:28:05.986288 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 6 00:28:05.986300 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 6 00:28:05.986311 kernel: extended physical RAM map: Sep 6 00:28:05.986329 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 6 00:28:05.986342 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable Sep 6 00:28:05.986354 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable Sep 6 00:28:05.986367 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable Sep 6 00:28:05.986380 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 6 00:28:05.986392 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 6 00:28:05.986405 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 6 00:28:05.986418 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 6 00:28:05.986430 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 6 00:28:05.986443 kernel: efi: EFI v2.70 by EDK II Sep 6 00:28:05.986458 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98 Sep 6 00:28:05.986471 kernel: SMBIOS 2.7 present. 
Sep 6 00:28:05.986483 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 6 00:28:05.986496 kernel: Hypervisor detected: KVM Sep 6 00:28:05.986508 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 6 00:28:05.986521 kernel: kvm-clock: cpu 0, msr 5219f001, primary cpu clock Sep 6 00:28:05.986533 kernel: kvm-clock: using sched offset of 5173031074 cycles Sep 6 00:28:05.986547 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 6 00:28:05.986560 kernel: tsc: Detected 2500.004 MHz processor Sep 6 00:28:05.986573 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 00:28:05.986586 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 00:28:05.986601 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 6 00:28:05.986614 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 00:28:05.986627 kernel: Using GB pages for direct mapping Sep 6 00:28:05.986640 kernel: Secure boot disabled Sep 6 00:28:05.986653 kernel: ACPI: Early table checksum verification disabled Sep 6 00:28:05.986671 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 6 00:28:05.986685 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 6 00:28:05.986701 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 6 00:28:05.986715 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 6 00:28:05.986729 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 6 00:28:05.986743 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 6 00:28:05.986758 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 6 00:28:05.986772 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 6 00:28:05.986786 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 6 00:28:05.986802 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 6 00:28:05.986816 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 6 00:28:05.986830 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 6 00:28:05.986844 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 6 00:28:05.986858 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 6 00:28:05.986872 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 6 00:28:05.986886 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 6 00:28:05.986900 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 6 00:28:05.986914 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 6 00:28:05.986931 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 6 00:28:05.986959 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 6 00:28:05.986974 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 6 00:28:05.986987 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 6 00:28:05.986997 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 6 00:28:05.987008 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 6 00:28:05.987019 kernel: SRAT: PXM 0 -> 
APIC 0x00 -> Node 0 Sep 6 00:28:05.987031 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 6 00:28:05.987044 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 6 00:28:05.987060 kernel: NUMA: Initialized distance table, cnt=1 Sep 6 00:28:05.987073 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Sep 6 00:28:05.987085 kernel: Zone ranges: Sep 6 00:28:05.987097 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 00:28:05.987109 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 6 00:28:05.987122 kernel: Normal empty Sep 6 00:28:05.987135 kernel: Movable zone start for each node Sep 6 00:28:05.987148 kernel: Early memory node ranges Sep 6 00:28:05.987160 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 6 00:28:05.987174 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 6 00:28:05.987187 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 6 00:28:05.987201 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 6 00:28:05.987214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 00:28:05.987227 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 6 00:28:05.987240 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 6 00:28:05.987254 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 6 00:28:05.987267 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 6 00:28:05.987278 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 6 00:28:05.987294 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 6 00:28:05.987305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 6 00:28:05.987316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 00:28:05.987328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 6 00:28:05.987339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 6 00:28:05.987351 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 00:28:05.987364 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 00:28:05.987377 kernel: TSC deadline timer available Sep 6 00:28:05.987388 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 6 00:28:05.987404 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 6 00:28:05.987420 kernel: Booting paravirtualized kernel on KVM Sep 6 00:28:05.987432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 00:28:05.987445 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 6 00:28:05.987458 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 6 00:28:05.987471 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 6 00:28:05.987483 kernel: pcpu-alloc: [0] 0 1 Sep 6 00:28:05.987495 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 Sep 6 00:28:05.987517 kernel: kvm-guest: PV spinlocks enabled Sep 6 00:28:05.987537 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 6 00:28:05.987549 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Sep 6 00:28:05.987561 kernel: Policy zone: DMA32 Sep 6 00:28:05.987575 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:28:05.987587 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:28:05.987598 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:28:05.987610 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 6 00:28:05.987622 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:28:05.987638 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved) Sep 6 00:28:05.987651 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 6 00:28:05.987665 kernel: Kernel/User page tables isolation: enabled Sep 6 00:28:05.987679 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 00:28:05.987693 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 00:28:05.987706 kernel: rcu: Hierarchical RCU implementation. Sep 6 00:28:05.987719 kernel: rcu: RCU event tracing is enabled. Sep 6 00:28:05.987744 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 6 00:28:05.987757 kernel: Rude variant of Tasks RCU enabled. Sep 6 00:28:05.987770 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:28:05.987785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 6 00:28:05.987799 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 6 00:28:05.987815 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 6 00:28:05.987827 kernel: random: crng init done Sep 6 00:28:05.987840 kernel: Console: colour dummy device 80x25 Sep 6 00:28:05.987853 kernel: printk: console [tty0] enabled Sep 6 00:28:05.987868 kernel: printk: console [ttyS0] enabled Sep 6 00:28:05.987882 kernel: ACPI: Core revision 20210730 Sep 6 00:28:05.987896 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 6 00:28:05.987912 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 00:28:05.987927 kernel: x2apic enabled Sep 6 00:28:05.987941 kernel: Switched APIC routing to physical x2apic. Sep 6 00:28:05.987981 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 6 00:28:05.987994 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Sep 6 00:28:05.988008 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 6 00:28:05.988020 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 6 00:28:05.988038 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 00:28:05.988053 kernel: Spectre V2 : Mitigation: Retpolines Sep 6 00:28:05.988068 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 6 00:28:05.988083 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Sep 6 00:28:05.988099 kernel: RETBleed: Vulnerable Sep 6 00:28:05.988114 kernel: Speculative Store Bypass: Vulnerable Sep 6 00:28:05.988129 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 6 00:28:05.988144 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 6 00:28:05.988158 kernel: GDS: Unknown: Dependent on hypervisor status Sep 6 00:28:05.988173 kernel: active return thunk: its_return_thunk Sep 6 00:28:05.988187 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 6 00:28:05.988204 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 00:28:05.988220 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 00:28:05.988235 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 00:28:05.988250 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 6 00:28:05.988264 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 6 00:28:05.988279 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 6 00:28:05.988293 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 6 00:28:05.988308 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 6 00:28:05.988323 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 6 00:28:05.988338 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 00:28:05.988356 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 6 00:28:05.988370 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 6 00:28:05.988394 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 6 00:28:05.988409 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 6 00:28:05.988424 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 6 00:28:05.988438 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 6 00:28:05.988454 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 6 00:28:05.988469 kernel: Freeing SMP alternatives memory: 32K Sep 6 00:28:05.988485 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:28:05.988499 kernel: LSM: Security Framework initializing Sep 6 00:28:05.988514 kernel: SELinux: Initializing. Sep 6 00:28:05.988529 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 00:28:05.988548 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 6 00:28:05.988563 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 6 00:28:05.988578 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 6 00:28:05.988594 kernel: signal: max sigframe size: 3632 Sep 6 00:28:05.988608 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:28:05.988623 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 6 00:28:05.988638 kernel: smp: Bringing up secondary CPUs ... Sep 6 00:28:05.988653 kernel: x86: Booting SMP configuration: Sep 6 00:28:05.988668 kernel: .... node #0, CPUs: #1 Sep 6 00:28:05.988683 kernel: kvm-clock: cpu 1, msr 5219f041, secondary cpu clock Sep 6 00:28:05.988701 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 Sep 6 00:28:05.988718 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
Sep 6 00:28:05.988735 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 6 00:28:05.988750 kernel: smp: Brought up 1 node, 2 CPUs Sep 6 00:28:05.988766 kernel: smpboot: Max logical packages: 1 Sep 6 00:28:05.988781 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Sep 6 00:28:05.988797 kernel: devtmpfs: initialized Sep 6 00:28:05.988812 kernel: x86/mm: Memory block size: 128MB Sep 6 00:28:05.988830 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 6 00:28:05.988845 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:28:05.988861 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 6 00:28:05.988876 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:28:05.988891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:28:05.988907 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:28:05.988922 kernel: audit: type=2000 audit(1757118486.918:1): state=initialized audit_enabled=0 res=1 Sep 6 00:28:05.988937 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:28:05.988970 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 00:28:05.988989 kernel: cpuidle: using governor menu Sep 6 00:28:05.989004 kernel: ACPI: bus type PCI registered Sep 6 00:28:05.989019 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:28:05.989034 kernel: dca service started, version 1.12.1 Sep 6 00:28:05.989049 kernel: PCI: Using configuration type 1 for base access Sep 6 00:28:05.989065 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 6 00:28:05.989080 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:28:05.989096 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:28:05.989111 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:28:05.989129 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:28:05.989146 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:28:05.989161 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:28:05.989176 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:28:05.989191 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:28:05.989207 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 6 00:28:05.989222 kernel: ACPI: Interpreter enabled Sep 6 00:28:05.989236 kernel: ACPI: PM: (supports S0 S5) Sep 6 00:28:05.989252 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 00:28:05.989270 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 00:28:05.989286 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 6 00:28:05.989301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:28:05.989515 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:28:05.989650 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Sep 6 00:28:05.989671 kernel: acpiphp: Slot [3] registered Sep 6 00:28:05.989687 kernel: acpiphp: Slot [4] registered Sep 6 00:28:05.989708 kernel: acpiphp: Slot [5] registered Sep 6 00:28:05.989723 kernel: acpiphp: Slot [6] registered Sep 6 00:28:05.989739 kernel: acpiphp: Slot [7] registered Sep 6 00:28:05.989755 kernel: acpiphp: Slot [8] registered Sep 6 00:28:05.989770 kernel: acpiphp: Slot [9] registered Sep 6 00:28:05.989785 kernel: acpiphp: Slot [10] registered Sep 6 00:28:05.989801 kernel: acpiphp: Slot [11] registered Sep 6 00:28:05.989817 kernel: acpiphp: Slot [12] registered Sep 6 00:28:05.989833 kernel: acpiphp: Slot [13] registered Sep 6 00:28:05.989849 kernel: acpiphp: Slot [14] registered Sep 6 00:28:05.989869 kernel: acpiphp: Slot [15] registered Sep 6 00:28:05.989884 kernel: acpiphp: Slot [16] registered Sep 6 00:28:05.989899 kernel: acpiphp: Slot [17] registered Sep 6 00:28:05.989915 kernel: acpiphp: Slot [18] registered Sep 6 00:28:05.989930 kernel: acpiphp: Slot [19] registered Sep 6 00:28:05.996260 kernel: acpiphp: Slot [20] registered Sep 6 00:28:05.996304 kernel: acpiphp: Slot [21] registered Sep 6 00:28:05.996320 kernel: acpiphp: Slot [22] registered Sep 6 00:28:05.996332 kernel: acpiphp: Slot [23] registered Sep 6 00:28:05.996352 kernel: acpiphp: Slot [24] registered Sep 6 00:28:05.996366 kernel: acpiphp: Slot [25] registered Sep 6 00:28:05.996381 kernel: acpiphp: Slot [26] registered Sep 6 00:28:05.996407 kernel: acpiphp: Slot [27] registered Sep 6 00:28:05.996422 kernel: acpiphp: Slot [28] registered Sep 6 00:28:05.996436 kernel: acpiphp: Slot [29] registered Sep 6 00:28:05.996451 kernel: acpiphp: Slot [30] registered Sep 6 00:28:05.996465 kernel: acpiphp: Slot [31] registered Sep 6 00:28:05.996480 kernel: PCI host bridge to bus 0000:00 Sep 6 00:28:05.996660 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 00:28:06.003142 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 00:28:06.003287 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 00:28:06.003404 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 6 00:28:06.003516 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 6 00:28:06.003631 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:28:06.003793 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 6 00:28:06.003941 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 6 00:28:06.004099 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 6 00:28:06.004224 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 6 00:28:06.004350 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 6 00:28:06.004489 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 6 00:28:06.004617 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 6 00:28:06.004740 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 6 00:28:06.004872 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 6 00:28:06.005009 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 6 00:28:06.005144 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 6 00:28:06.005268 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Sep 6 00:28:06.005391 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 6 00:28:06.005515 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Sep 6 
00:28:06.005638 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 00:28:06.005774 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 6 00:28:06.005898 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Sep 6 00:28:06.006036 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 6 00:28:06.006156 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Sep 6 00:28:06.006173 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 6 00:28:06.006186 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 6 00:28:06.006203 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 6 00:28:06.006217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 6 00:28:06.006230 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 6 00:28:06.006243 kernel: iommu: Default domain type: Translated Sep 6 00:28:06.006256 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 00:28:06.006372 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 6 00:28:06.006492 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 00:28:06.006610 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 6 00:28:06.006626 kernel: vgaarb: loaded Sep 6 00:28:06.006642 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:28:06.006655 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:28:06.006668 kernel: PTP clock support registered Sep 6 00:28:06.006682 kernel: Registered efivars operations Sep 6 00:28:06.006695 kernel: PCI: Using ACPI for IRQ routing Sep 6 00:28:06.006708 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 00:28:06.006721 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] Sep 6 00:28:06.006734 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 6 00:28:06.006747 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 6 00:28:06.006763 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 6 00:28:06.006776 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 6 00:28:06.006789 kernel: clocksource: Switched to clocksource kvm-clock Sep 6 00:28:06.006801 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:28:06.006815 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:28:06.006828 kernel: pnp: PnP ACPI init Sep 6 00:28:06.006842 kernel: pnp: PnP ACPI: found 5 devices Sep 6 00:28:06.006855 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 00:28:06.006868 kernel: NET: Registered PF_INET protocol family Sep 6 00:28:06.006884 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:28:06.006897 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 6 00:28:06.006910 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:28:06.006924 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 6 00:28:06.006936 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 6 00:28:06.006960 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 6 00:28:06.006973 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 00:28:06.006986 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 6 00:28:06.007001 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family Sep 6 00:28:06.007014 kernel: NET: Registered PF_XDP protocol family Sep 6 00:28:06.007126 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 00:28:06.007233 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 00:28:06.007339 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 00:28:06.007445 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 6 00:28:06.007549 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 6 00:28:06.007671 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 6 00:28:06.007791 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Sep 6 00:28:06.007811 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:28:06.007824 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 6 00:28:06.007837 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Sep 6 00:28:06.007850 kernel: clocksource: Switched to clocksource tsc Sep 6 00:28:06.007863 kernel: Initialise system trusted keyrings Sep 6 00:28:06.007875 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 6 00:28:06.007889 kernel: Key type asymmetric registered Sep 6 00:28:06.007901 kernel: Asymmetric key parser 'x509' registered Sep 6 00:28:06.007916 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:28:06.007928 kernel: io scheduler mq-deadline registered Sep 6 00:28:06.007941 kernel: io scheduler kyber registered Sep 6 00:28:06.008028 kernel: io scheduler bfq registered Sep 6 00:28:06.008040 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 00:28:06.008053 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:28:06.008066 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 00:28:06.008079 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 6 00:28:06.008092 kernel: i8042: Warning: Keylock active Sep 6 00:28:06.008107 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 6 00:28:06.008120 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 6 00:28:06.008248 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 6 00:28:06.008359 kernel: rtc_cmos 00:00: registered as rtc0 Sep 6 00:28:06.008480 kernel: rtc_cmos 00:00: setting system clock to 2025-09-06T00:28:05 UTC (1757118485) Sep 6 00:28:06.008591 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 6 00:28:06.008607 kernel: intel_pstate: CPU model not supported Sep 6 00:28:06.008620 kernel: efifb: probing for efifb Sep 6 00:28:06.008636 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 6 00:28:06.008649 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 6 00:28:06.008662 kernel: efifb: scrolling: redraw Sep 6 00:28:06.008675 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 6 00:28:06.008688 kernel: Console: switching to colour frame buffer device 100x37 Sep 6 00:28:06.008702 kernel: fb0: EFI VGA frame buffer device Sep 6 00:28:06.008738 kernel: pstore: Registered efi as persistent store backend Sep 6 00:28:06.008754 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:28:06.008768 kernel: Segment Routing with IPv6 Sep 6 00:28:06.008785 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:28:06.008798 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:28:06.008812 kernel: Key type dns_resolver registered Sep 6 00:28:06.008825 kernel: IPI shorthand 
broadcast: enabled Sep 6 00:28:06.008839 kernel: sched_clock: Marking stable (363409819, 137673045)->(587810216, -86727352) Sep 6 00:28:06.008853 kernel: registered taskstats version 1 Sep 6 00:28:06.008867 kernel: Loading compiled-in X.509 certificates Sep 6 00:28:06.008880 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 00:28:06.008894 kernel: Key type .fscrypt registered Sep 6 00:28:06.008914 kernel: Key type fscrypt-provisioning registered Sep 6 00:28:06.008928 kernel: pstore: Using crash dump compression: deflate Sep 6 00:28:06.008942 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:28:06.008966 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:28:06.008980 kernel: ima: No architecture policies found Sep 6 00:28:06.008993 kernel: clk: Disabling unused clocks Sep 6 00:28:06.009007 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 00:28:06.009022 kernel: Write protecting the kernel read-only data: 28672k Sep 6 00:28:06.009036 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 00:28:06.009053 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 00:28:06.009066 kernel: Run /init as init process Sep 6 00:28:06.009080 kernel: with arguments: Sep 6 00:28:06.009093 kernel: /init Sep 6 00:28:06.009106 kernel: with environment: Sep 6 00:28:06.009120 kernel: HOME=/ Sep 6 00:28:06.009133 kernel: TERM=linux Sep 6 00:28:06.009146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:28:06.009164 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:28:06.009183 systemd[1]: Detected virtualization amazon. Sep 6 00:28:06.009197 systemd[1]: Detected architecture x86-64. Sep 6 00:28:06.009211 systemd[1]: Running in initrd. Sep 6 00:28:06.009225 systemd[1]: No hostname configured, using default hostname. Sep 6 00:28:06.009239 systemd[1]: Hostname set to . Sep 6 00:28:06.009255 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:28:06.009269 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:28:06.009286 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:28:06.009300 systemd[1]: Reached target cryptsetup.target. Sep 6 00:28:06.009313 systemd[1]: Reached target paths.target. Sep 6 00:28:06.009330 systemd[1]: Reached target slices.target. Sep 6 00:28:06.009344 systemd[1]: Reached target swap.target. Sep 6 00:28:06.009360 systemd[1]: Reached target timers.target. Sep 6 00:28:06.009375 systemd[1]: Listening on iscsid.socket. Sep 6 00:28:06.009390 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:28:06.009404 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:28:06.009418 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:28:06.009432 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:28:06.009447 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:28:06.009461 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:28:06.009478 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:28:06.009492 systemd[1]: Reached target sockets.target. Sep 6 00:28:06.009507 systemd[1]: Starting kmod-static-nodes.service... 
Sep 6 00:28:06.009521 systemd[1]: Finished network-cleanup.service. Sep 6 00:28:06.009535 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:28:06.009550 systemd[1]: Starting systemd-journald.service... Sep 6 00:28:06.009564 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:28:06.009578 systemd[1]: Starting systemd-resolved.service... Sep 6 00:28:06.009592 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:28:06.009609 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:28:06.009625 kernel: audit: type=1130 audit(1757118486.002:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.009645 systemd-journald[185]: Journal started Sep 6 00:28:06.009715 systemd-journald[185]: Runtime Journal (/run/log/journal/ec24da6112671e6b883e2f5741d05fe0) is 4.8M, max 38.3M, 33.5M free. Sep 6 00:28:06.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.012895 systemd[1]: Started systemd-journald.service. Sep 6 00:28:06.012943 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 6 00:28:06.018922 systemd-modules-load[186]: Inserted module 'overlay' Sep 6 00:28:06.036097 kernel: audit: type=1130 audit(1757118486.024:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.025444 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:28:06.052120 kernel: audit: type=1130 audit(1757118486.036:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.037292 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:28:06.048118 systemd-resolved[187]: Positive Trust Anchors: Sep 6 00:28:06.063890 kernel: audit: type=1130 audit(1757118486.052:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.048130 systemd-resolved[187]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:28:06.048182 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:28:06.052637 systemd-resolved[187]: Defaulting to hostname 'linux'. Sep 6 00:28:06.102928 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 00:28:06.102977 kernel: audit: type=1130 audit(1757118486.089:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.102997 kernel: Bridge firewalling registered Sep 6 00:28:06.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.054365 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:28:06.072161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:28:06.086139 systemd[1]: Started systemd-resolved.service. Sep 6 00:28:06.091122 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:28:06.120646 kernel: audit: type=1130 audit(1757118486.106:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.099830 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 6 00:28:06.107427 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:28:06.129040 kernel: audit: type=1130 audit(1757118486.115:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.117103 systemd[1]: Reached target nss-lookup.target. Sep 6 00:28:06.136179 kernel: SCSI subsystem initialized Sep 6 00:28:06.119333 systemd[1]: Starting dracut-cmdline.service... 
Sep 6 00:28:06.145986 dracut-cmdline[202]: dracut-dracut-053 Sep 6 00:28:06.150420 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 00:28:06.174589 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 00:28:06.174623 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:28:06.174643 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:28:06.174662 kernel: audit: type=1130 audit(1757118486.165:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.161512 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 6 00:28:06.162545 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:28:06.172614 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:28:06.184217 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:28:06.193811 kernel: audit: type=1130 audit(1757118486.184:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.240978 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:28:06.259978 kernel: iscsi: registered transport (tcp) Sep 6 00:28:06.284990 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:28:06.285066 kernel: QLogic iSCSI HBA Driver Sep 6 00:28:06.318089 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:28:06.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.320121 systemd[1]: Starting dracut-pre-udev.service... 
Sep 6 00:28:06.371994 kernel: raid6: avx512x4 gen() 18149 MB/s Sep 6 00:28:06.389976 kernel: raid6: avx512x4 xor() 8125 MB/s Sep 6 00:28:06.407995 kernel: raid6: avx512x2 gen() 18218 MB/s Sep 6 00:28:06.425974 kernel: raid6: avx512x2 xor() 23981 MB/s Sep 6 00:28:06.443986 kernel: raid6: avx512x1 gen() 17793 MB/s Sep 6 00:28:06.461979 kernel: raid6: avx512x1 xor() 21641 MB/s Sep 6 00:28:06.479991 kernel: raid6: avx2x4 gen() 18093 MB/s Sep 6 00:28:06.497977 kernel: raid6: avx2x4 xor() 7450 MB/s Sep 6 00:28:06.515984 kernel: raid6: avx2x2 gen() 18118 MB/s Sep 6 00:28:06.533974 kernel: raid6: avx2x2 xor() 17929 MB/s Sep 6 00:28:06.551993 kernel: raid6: avx2x1 gen() 14015 MB/s Sep 6 00:28:06.569978 kernel: raid6: avx2x1 xor() 15418 MB/s Sep 6 00:28:06.587981 kernel: raid6: sse2x4 gen() 9490 MB/s Sep 6 00:28:06.605976 kernel: raid6: sse2x4 xor() 6040 MB/s Sep 6 00:28:06.623996 kernel: raid6: sse2x2 gen() 10388 MB/s Sep 6 00:28:06.641978 kernel: raid6: sse2x2 xor() 6075 MB/s Sep 6 00:28:06.659993 kernel: raid6: sse2x1 gen() 9317 MB/s Sep 6 00:28:06.678272 kernel: raid6: sse2x1 xor() 4729 MB/s Sep 6 00:28:06.678342 kernel: raid6: using algorithm avx512x2 gen() 18218 MB/s Sep 6 00:28:06.678361 kernel: raid6: .... xor() 23981 MB/s, rmw enabled Sep 6 00:28:06.679410 kernel: raid6: using avx512x2 recovery algorithm Sep 6 00:28:06.693980 kernel: xor: automatically using best checksumming function avx Sep 6 00:28:06.799992 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 00:28:06.809367 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:28:06.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.809000 audit: BPF prog-id=7 op=LOAD Sep 6 00:28:06.809000 audit: BPF prog-id=8 op=LOAD Sep 6 00:28:06.810922 systemd[1]: Starting systemd-udevd.service... Sep 6 00:28:06.825196 systemd-udevd[384]: Using default interface naming scheme 'v252'. Sep 6 00:28:06.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.830578 systemd[1]: Started systemd-udevd.service. Sep 6 00:28:06.832413 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:28:06.853032 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation Sep 6 00:28:06.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.886842 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:28:06.888112 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:28:06.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:06.933630 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:28:06.988973 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:28:07.026721 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 6 00:28:07.026793 kernel: AES CTR mode by8 optimization enabled Sep 6 00:28:07.031673 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 6 00:28:07.050359 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 6 00:28:07.050535 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 6 00:28:07.050678 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 6 00:28:07.050839 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:3d:a2:a2:c7:29 Sep 6 00:28:07.051000 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 6 00:28:07.062975 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 6 00:28:07.072071 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:28:07.072140 kernel: GPT:9289727 != 16777215 Sep 6 00:28:07.072163 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:28:07.073813 kernel: GPT:9289727 != 16777215 Sep 6 00:28:07.074828 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:28:07.076064 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:28:07.083017 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:28:07.141966 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (431) Sep 6 00:28:07.175433 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:28:07.202835 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:28:07.207563 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:28:07.219767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:28:07.225995 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:28:07.236192 systemd[1]: Starting disk-uuid.service... Sep 6 00:28:07.245109 disk-uuid[593]: Primary Header is updated. Sep 6 00:28:07.245109 disk-uuid[593]: Secondary Entries is updated. Sep 6 00:28:07.245109 disk-uuid[593]: Secondary Header is updated. Sep 6 00:28:07.253982 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:28:07.263990 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:28:08.277179 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 6 00:28:08.277248 disk-uuid[594]: The operation has completed successfully. Sep 6 00:28:08.415188 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:28:08.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.415308 systemd[1]: Finished disk-uuid.service. Sep 6 00:28:08.426054 systemd[1]: Starting verity-setup.service... Sep 6 00:28:08.455543 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 00:28:08.568106 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:28:08.570972 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:28:08.574636 systemd[1]: Finished verity-setup.service. Sep 6 00:28:08.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.679027 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Sep 6 00:28:08.679745 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:28:08.680681 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:28:08.681747 systemd[1]: Starting ignition-setup.service... Sep 6 00:28:08.686591 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 00:28:08.711376 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:28:08.711448 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:28:08.711469 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:28:08.728992 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:28:08.744568 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:28:08.758030 systemd[1]: Finished ignition-setup.service. Sep 6 00:28:08.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.760241 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:28:08.778055 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:28:08.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.778000 audit: BPF prog-id=9 op=LOAD Sep 6 00:28:08.780597 systemd[1]: Starting systemd-networkd.service... Sep 6 00:28:08.805405 systemd-networkd[1107]: lo: Link UP Sep 6 00:28:08.805417 systemd-networkd[1107]: lo: Gained carrier Sep 6 00:28:08.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.806427 systemd-networkd[1107]: Enumeration completed Sep 6 00:28:08.806549 systemd[1]: Started systemd-networkd.service. Sep 6 00:28:08.806990 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:28:08.808101 systemd[1]: Reached target network.target. Sep 6 00:28:08.810163 systemd[1]: Starting iscsiuio.service... Sep 6 00:28:08.815866 systemd-networkd[1107]: eth0: Link UP Sep 6 00:28:08.815872 systemd-networkd[1107]: eth0: Gained carrier Sep 6 00:28:08.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.818091 systemd[1]: Started iscsiuio.service. Sep 6 00:28:08.821793 systemd[1]: Starting iscsid.service... Sep 6 00:28:08.827908 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:28:08.827908 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 6 00:28:08.827908 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:28:08.827908 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored. 
Sep 6 00:28:08.827908 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:28:08.827908 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:28:08.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.828063 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.18.181/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:28:08.831273 systemd[1]: Started iscsid.service. Sep 6 00:28:08.833293 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:28:08.849755 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:28:08.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:08.850513 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:28:08.851204 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:28:08.852486 systemd[1]: Reached target remote-fs.target. Sep 6 00:28:08.855934 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:28:08.866688 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:28:08.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.188917 ignition[1090]: Ignition 2.14.0 Sep 6 00:28:09.188934 ignition[1090]: Stage: fetch-offline Sep 6 00:28:09.189075 ignition[1090]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:09.189107 ignition[1090]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:09.209091 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:09.209447 ignition[1090]: Ignition finished successfully Sep 6 00:28:09.211380 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:28:09.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.213373 systemd[1]: Starting ignition-fetch.service... 
Sep 6 00:28:09.221855 ignition[1131]: Ignition 2.14.0 Sep 6 00:28:09.221867 ignition[1131]: Stage: fetch Sep 6 00:28:09.222032 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:09.222053 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:09.229119 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:09.230304 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:09.251528 ignition[1131]: INFO : PUT result: OK Sep 6 00:28:09.253595 ignition[1131]: DEBUG : parsed url from cmdline: "" Sep 6 00:28:09.253595 ignition[1131]: INFO : no config URL provided Sep 6 00:28:09.253595 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:28:09.253595 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 6 00:28:09.257264 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:09.257264 ignition[1131]: INFO : PUT result: OK Sep 6 00:28:09.257264 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 6 00:28:09.257264 ignition[1131]: INFO : GET result: OK Sep 6 00:28:09.257264 ignition[1131]: DEBUG : parsing config with SHA512: 0b27edd5ad75b13a71613f012d1ad4e97a98d747854455dd7124693e69491409c1e3cdfc80c92d55ef6c05224b149b2842547d7f19be8ed50f378431989c01e6 Sep 6 00:28:09.260046 ignition[1131]: fetch: fetch complete Sep 6 00:28:09.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.259096 unknown[1131]: fetched base config from "system" Sep 6 00:28:09.260057 ignition[1131]: fetch: fetch passed Sep 6 00:28:09.259107 unknown[1131]: fetched base config from "system" Sep 6 00:28:09.260133 ignition[1131]: Ignition finished successfully Sep 6 00:28:09.259116 unknown[1131]: fetched user config from "aws" Sep 6 00:28:09.262118 systemd[1]: Finished ignition-fetch.service. Sep 6 00:28:09.265315 systemd[1]: Starting ignition-kargs.service... Sep 6 00:28:09.277661 ignition[1137]: Ignition 2.14.0 Sep 6 00:28:09.277674 ignition[1137]: Stage: kargs Sep 6 00:28:09.277880 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:09.277918 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:09.285616 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:09.286397 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:09.287491 ignition[1137]: INFO : PUT result: OK Sep 6 00:28:09.289904 ignition[1137]: kargs: kargs passed Sep 6 00:28:09.289995 ignition[1137]: Ignition finished successfully Sep 6 00:28:09.291996 systemd[1]: Finished ignition-kargs.service. Sep 6 00:28:09.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.293863 systemd[1]: Starting ignition-disks.service... 
Sep 6 00:28:09.302901 ignition[1143]: Ignition 2.14.0 Sep 6 00:28:09.302914 ignition[1143]: Stage: disks Sep 6 00:28:09.303139 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:09.303173 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:09.310564 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:09.311358 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:09.312717 ignition[1143]: INFO : PUT result: OK Sep 6 00:28:09.314756 ignition[1143]: disks: disks passed Sep 6 00:28:09.314825 ignition[1143]: Ignition finished successfully Sep 6 00:28:09.316143 systemd[1]: Finished ignition-disks.service. Sep 6 00:28:09.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.317298 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:28:09.318219 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:28:09.319210 systemd[1]: Reached target local-fs.target. Sep 6 00:28:09.320196 systemd[1]: Reached target sysinit.target. Sep 6 00:28:09.321274 systemd[1]: Reached target basic.target. Sep 6 00:28:09.323418 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:28:09.354736 systemd-fsck[1151]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 00:28:09.357932 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:28:09.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.359368 systemd[1]: Mounting sysroot.mount... Sep 6 00:28:09.388980 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:28:09.390424 systemd[1]: Mounted sysroot.mount. Sep 6 00:28:09.394297 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:28:09.406258 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:28:09.408182 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:28:09.409630 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:28:09.409671 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:28:09.411421 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:28:09.415498 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:28:09.422992 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:28:09.438279 initrd-setup-root[1180]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:28:09.443325 initrd-setup-root[1188]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:28:09.449546 initrd-setup-root[1196]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:28:09.553886 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:28:09.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.555680 systemd[1]: Starting ignition-mount.service... Sep 6 00:28:09.557604 systemd[1]: Starting sysroot-boot.service... 
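systemd-fsck reports the ROOT filesystem as clean, with inode and block usage counts. A small, hedged sketch that pulls those numbers out of an e2fsck-style summary line like the one logged above (the exact line format is an assumption based on this output):

import re

# Summary line copied from the systemd-fsck output above.
LINE = "ROOT: clean, 629/553520 files, 56028/553472 blocks"

# label: clean, used/total files, used/total blocks
PATTERN = re.compile(
    r"^(?P<label>\S+): clean, (?P<files_used>\d+)/(?P<files_total>\d+) files, "
    r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks$"
)

m = PATTERN.match(LINE)
if m:
    used = int(m["blocks_used"])
    total = int(m["blocks_total"])
    print(f"{m['label']}: {used / total:.1%} of blocks in use, "
          f"{m['files_used']}/{m['files_total']} inodes allocated")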
Sep 6 00:28:09.567493 bash[1213]: umount: /sysroot/usr/share/oem: not mounted. Sep 6 00:28:09.577753 ignition[1214]: INFO : Ignition 2.14.0 Sep 6 00:28:09.578671 ignition[1214]: INFO : Stage: mount Sep 6 00:28:09.579707 ignition[1214]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:09.579707 ignition[1214]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:09.591492 ignition[1214]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:09.592987 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:09.595246 ignition[1214]: INFO : PUT result: OK Sep 6 00:28:09.599345 ignition[1214]: INFO : mount: mount passed Sep 6 00:28:09.599345 ignition[1214]: INFO : Ignition finished successfully Sep 6 00:28:09.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.601672 systemd[1]: Finished ignition-mount.service. Sep 6 00:28:09.605187 systemd[1]: Finished sysroot-boot.service. Sep 6 00:28:09.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:09.611182 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:28:09.634975 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1223) Sep 6 00:28:09.639338 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 6 00:28:09.639424 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 6 00:28:09.639455 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 6 00:28:09.677001 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 6 00:28:09.680840 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:28:09.682820 systemd[1]: Starting ignition-files.service... 
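sysroot-usr-share-oem.mount attaches the partition labelled OEM (nvme0n1p6, btrfs) under /sysroot. A rough sketch of doing the same by hand through the /dev/disk/by-label symlink; the label and mount point come from the log, while the helper itself is hypothetical and needs root:

import os
import subprocess

LABEL = "OEM"                      # label reported by the BTRFS messages above
TARGET = "/sysroot/usr/share/oem"  # mount point used by sysroot-usr-share-oem.mount

def mount_by_label(label: str, target: str, fstype: str = "btrfs") -> None:
    """Resolve /dev/disk/by-label/<label> and mount it at target."""
    device = os.path.realpath(f"/dev/disk/by-label/{label}")
    os.makedirs(target, exist_ok=True)
    subprocess.run(["mount", "-t", fstype, device, target], check=True)

if __name__ == "__main__":
    mount_by_label(LABEL, TARGET)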
Sep 6 00:28:09.701231 ignition[1243]: INFO : Ignition 2.14.0 Sep 6 00:28:09.701231 ignition[1243]: INFO : Stage: files Sep 6 00:28:09.703492 ignition[1243]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:09.703492 ignition[1243]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:09.710303 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:09.711199 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:09.712029 ignition[1243]: INFO : PUT result: OK Sep 6 00:28:09.714685 ignition[1243]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:28:09.723045 ignition[1243]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:28:09.723045 ignition[1243]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:28:09.727842 ignition[1243]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:28:09.729613 ignition[1243]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:28:09.733061 unknown[1243]: wrote ssh authorized keys file for user: core Sep 6 00:28:09.734229 ignition[1243]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:28:09.736002 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:28:09.736002 ignition[1243]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:28:09.747042 ignition[1243]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4104913237" Sep 6 00:28:09.747042 ignition[1243]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4104913237": device or resource busy Sep 6 00:28:09.747042 ignition[1243]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4104913237", trying btrfs: device or resource busy Sep 6 00:28:09.747042 ignition[1243]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4104913237" Sep 6 00:28:09.747042 ignition[1243]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4104913237" Sep 6 00:28:09.759047 ignition[1243]: INFO : op(3): [started] unmounting "/mnt/oem4104913237" Sep 6 00:28:09.760357 ignition[1243]: INFO : op(3): [finished] unmounting "/mnt/oem4104913237" Sep 6 00:28:09.759662 systemd[1]: mnt-oem4104913237.mount: Deactivated successfully. 
Sep 6 00:28:09.762568 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 6 00:28:09.762568 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:28:09.762568 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:28:09.762568 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:28:09.762568 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:28:09.762568 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:28:09.782525 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:28:09.782525 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 6 00:28:09.782525 ignition[1243]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:28:09.782525 ignition[1243]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2701793573" Sep 6 00:28:09.782525 ignition[1243]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2701793573": device or resource busy Sep 6 00:28:09.782525 ignition[1243]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2701793573", trying btrfs: device or resource busy Sep 6 00:28:09.782525 ignition[1243]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2701793573" Sep 6 00:28:09.782525 ignition[1243]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2701793573" Sep 6 00:28:09.782525 ignition[1243]: INFO : op(6): [started] unmounting "/mnt/oem2701793573" Sep 6 00:28:09.782525 ignition[1243]: INFO : op(6): [finished] unmounting "/mnt/oem2701793573" Sep 6 00:28:09.782525 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 6 00:28:09.782525 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 6 00:28:09.782525 ignition[1243]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:28:09.813175 ignition[1243]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2705887347" Sep 6 00:28:09.813175 ignition[1243]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2705887347": device or resource busy Sep 6 00:28:09.813175 ignition[1243]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2705887347", trying btrfs: device or resource busy Sep 6 00:28:09.813175 ignition[1243]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2705887347" Sep 6 00:28:09.813175 ignition[1243]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2705887347" Sep 6 00:28:09.813175 ignition[1243]: INFO : op(9): [started] unmounting "/mnt/oem2705887347" Sep 6 
00:28:09.813175 ignition[1243]: INFO : op(9): [finished] unmounting "/mnt/oem2705887347" Sep 6 00:28:09.813175 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 6 00:28:09.813175 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 00:28:09.813175 ignition[1243]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 6 00:28:09.813175 ignition[1243]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3440206561" Sep 6 00:28:09.813175 ignition[1243]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3440206561": device or resource busy Sep 6 00:28:09.813175 ignition[1243]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3440206561", trying btrfs: device or resource busy Sep 6 00:28:09.813175 ignition[1243]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3440206561" Sep 6 00:28:09.813175 ignition[1243]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3440206561" Sep 6 00:28:09.813175 ignition[1243]: INFO : op(c): [started] unmounting "/mnt/oem3440206561" Sep 6 00:28:09.813175 ignition[1243]: INFO : op(c): [finished] unmounting "/mnt/oem3440206561" Sep 6 00:28:09.813175 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 6 00:28:09.813175 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:28:09.813175 ignition[1243]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 6 00:28:10.270754 ignition[1243]: INFO : GET result: OK Sep 6 00:28:10.609210 systemd[1]: mnt-oem2701793573.mount: Deactivated successfully. 
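Each OEM file written above follows the same pattern: Ignition tries to mount /dev/disk/by-label/OEM as ext4, logs "device or resource busy", succeeds on the btrfs retry, writes the file, and unmounts. A simplified sketch of that try-then-fall-back loop (illustrative only; Ignition's real mount handling differs):

import subprocess
import tempfile

DEVICE = "/dev/disk/by-label/OEM"   # device named in the log above
FS_CANDIDATES = ("ext4", "btrfs")   # order the log shows Ignition trying

def mount_with_fallback(device: str) -> str:
    """Mount device at a temporary directory, trying each filesystem type in turn."""
    target = tempfile.mkdtemp(prefix="oem")
    last_error = None
    for fstype in FS_CANDIDATES:
        try:
            subprocess.run(["mount", "-t", fstype, device, target], check=True)
            return target
        except subprocess.CalledProcessError as err:
            last_error = err   # e.g. the "device or resource busy" seen for ext4 above
    raise RuntimeError(f"could not mount {device}") from last_error

if __name__ == "__main__":
    mountpoint = mount_with_fallback(DEVICE)
    try:
        print("mounted at", mountpoint)   # copy files here, then release the mount
    finally:
        subprocess.run(["umount", mountpoint], check=True)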
Sep 6 00:28:10.804190 systemd-networkd[1107]: eth0: Gained IPv6LL Sep 6 00:28:10.846581 ignition[1243]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 6 00:28:10.846581 ignition[1243]: INFO : files: op(b): [started] processing unit "nvidia.service" Sep 6 00:28:10.846581 ignition[1243]: INFO : files: op(b): [finished] processing unit "nvidia.service" Sep 6 00:28:10.846581 ignition[1243]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:28:10.846581 ignition[1243]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 00:28:10.846581 ignition[1243]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(d): op(e): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(f): [started] setting preset to enabled for "nvidia.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(f): [finished] setting preset to enabled for "nvidia.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(11): [started] setting preset to enabled for "amazon-ssm-agent.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: op(11): [finished] setting preset to enabled for "amazon-ssm-agent.service" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:28:10.853698 ignition[1243]: INFO : files: files passed Sep 6 00:28:10.853698 ignition[1243]: INFO : Ignition finished successfully Sep 6 00:28:10.907836 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 6 00:28:10.907880 kernel: audit: type=1130 audit(1757118490.857:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.907902 kernel: audit: type=1130 audit(1757118490.885:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.907919 kernel: audit: type=1131 audit(1757118490.885:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.907937 kernel: audit: type=1130 audit(1757118490.899:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:10.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.857130 systemd[1]: Finished ignition-files.service. Sep 6 00:28:10.862234 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:28:10.871366 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:28:10.914720 initrd-setup-root-after-ignition[1268]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:28:10.873897 systemd[1]: Starting ignition-quench.service... Sep 6 00:28:10.880625 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:28:10.880764 systemd[1]: Finished ignition-quench.service. Sep 6 00:28:10.887132 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:28:10.900234 systemd[1]: Reached target ignition-complete.target. Sep 6 00:28:10.907958 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:28:10.931008 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:28:10.931159 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:28:10.943086 kernel: audit: type=1130 audit(1757118490.931:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.943124 kernel: audit: type=1131 audit(1757118490.931:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.932882 systemd[1]: Reached target initrd-fs.target. Sep 6 00:28:10.943902 systemd[1]: Reached target initrd.target. Sep 6 00:28:10.945528 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:28:10.946860 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:28:10.960269 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:28:10.970073 kernel: audit: type=1130 audit(1757118490.960:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:10.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.968334 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:28:10.980683 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:28:10.982342 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:28:10.984041 systemd[1]: Stopped target timers.target. Sep 6 00:28:10.985459 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:28:10.985677 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:28:10.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.988003 systemd[1]: Stopped target initrd.target. Sep 6 00:28:10.994683 kernel: audit: type=1131 audit(1757118490.986:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:10.994982 systemd[1]: Stopped target basic.target. Sep 6 00:28:10.996819 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:28:10.997970 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:28:10.999312 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:28:11.000675 systemd[1]: Stopped target remote-fs.target. Sep 6 00:28:11.001874 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:28:11.003120 systemd[1]: Stopped target sysinit.target. Sep 6 00:28:11.004549 systemd[1]: Stopped target local-fs.target. Sep 6 00:28:11.005865 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:28:11.007061 systemd[1]: Stopped target swap.target. Sep 6 00:28:11.008558 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:28:11.015335 kernel: audit: type=1131 audit(1757118491.008:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.008772 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:28:11.010067 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:28:11.023078 kernel: audit: type=1131 audit(1757118491.016:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.016140 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:28:11.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.016525 systemd[1]: Stopped dracut-initqueue.service. 
Sep 6 00:28:11.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.017754 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:28:11.017991 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:28:11.024099 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:28:11.024323 systemd[1]: Stopped ignition-files.service. Sep 6 00:28:11.026826 systemd[1]: Stopping ignition-mount.service... Sep 6 00:28:11.033203 iscsid[1112]: iscsid shutting down. Sep 6 00:28:11.034573 systemd[1]: Stopping iscsid.service... Sep 6 00:28:11.037892 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:28:11.039942 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:28:11.041303 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:28:11.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.043768 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:28:11.044801 ignition[1281]: INFO : Ignition 2.14.0 Sep 6 00:28:11.044801 ignition[1281]: INFO : Stage: umount Sep 6 00:28:11.044801 ignition[1281]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 00:28:11.044801 ignition[1281]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 6 00:28:11.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.047333 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:28:11.055427 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:28:11.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.055601 systemd[1]: Stopped iscsid.service. Sep 6 00:28:11.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.070370 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 6 00:28:11.070370 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 6 00:28:11.070370 ignition[1281]: INFO : PUT result: OK Sep 6 00:28:11.063854 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:28:11.066029 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:28:11.068573 systemd[1]: Stopping iscsiuio.service... Sep 6 00:28:11.078710 ignition[1281]: INFO : umount: umount passed Sep 6 00:28:11.078710 ignition[1281]: INFO : Ignition finished successfully Sep 6 00:28:11.075396 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:28:11.075555 systemd[1]: Stopped iscsiuio.service. 
Sep 6 00:28:11.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.083039 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:28:11.083162 systemd[1]: Stopped ignition-mount.service. Sep 6 00:28:11.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.087347 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:28:11.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.087420 systemd[1]: Stopped ignition-disks.service. Sep 6 00:28:11.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.089092 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:28:11.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.089160 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:28:11.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.090035 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 6 00:28:11.090096 systemd[1]: Stopped ignition-fetch.service. Sep 6 00:28:11.091199 systemd[1]: Stopped target network.target. Sep 6 00:28:11.092728 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:28:11.092799 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:28:11.093451 systemd[1]: Stopped target paths.target. Sep 6 00:28:11.094809 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:28:11.098029 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:28:11.098934 systemd[1]: Stopped target slices.target. Sep 6 00:28:11.099975 systemd[1]: Stopped target sockets.target. Sep 6 00:28:11.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.101119 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:28:11.101177 systemd[1]: Closed iscsid.socket. Sep 6 00:28:11.102181 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:28:11.102234 systemd[1]: Closed iscsiuio.socket. Sep 6 00:28:11.103212 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:28:11.103281 systemd[1]: Stopped ignition-setup.service. Sep 6 00:28:11.104659 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:28:11.105704 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:28:11.108004 systemd-networkd[1107]: eth0: DHCPv6 lease lost Sep 6 00:28:11.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:11.109755 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 00:28:11.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.110543 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:28:11.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.115000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:28:11.116000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:28:11.110677 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:28:11.113852 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:28:11.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.114162 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:28:11.115470 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:28:11.115589 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:28:11.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.116770 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:28:11.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.116816 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:28:11.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.117858 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:28:11.117921 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:28:11.120104 systemd[1]: Stopping network-cleanup.service... Sep 6 00:28:11.122412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:28:11.122489 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:28:11.123704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:28:11.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.123768 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:28:11.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.125253 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 00:28:11.125315 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:28:11.126480 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:28:11.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:11.133300 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:28:11.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.137993 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:28:11.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.138141 systemd[1]: Stopped network-cleanup.service. Sep 6 00:28:11.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.139315 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:28:11.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:11.139491 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:28:11.140905 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:28:11.140980 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:28:11.141795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:28:11.141841 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:28:11.142915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:28:11.143069 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:28:11.144110 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:28:11.144170 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:28:11.145401 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:28:11.145459 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:28:11.148189 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:28:11.151007 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:28:11.151102 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 6 00:28:11.152231 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:28:11.152292 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:28:11.153390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:28:11.153448 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:28:11.155780 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 6 00:28:11.157677 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 6 00:28:11.157772 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:28:11.158471 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:28:11.160339 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:28:11.178165 systemd[1]: Switching root. Sep 6 00:28:11.202686 systemd-journald[185]: Journal stopped Sep 6 00:28:16.055648 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Sep 6 00:28:16.055745 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:28:16.055768 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:28:16.055788 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:28:16.055812 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:28:16.055841 kernel: SELinux: policy capability open_perms=1 Sep 6 00:28:16.055860 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:28:16.055879 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:28:16.055905 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:28:16.055925 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:28:16.055975 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:28:16.056000 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:28:16.056022 systemd[1]: Successfully loaded SELinux policy in 86.561ms. Sep 6 00:28:16.056052 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.812ms. Sep 6 00:28:16.056075 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:28:16.056095 systemd[1]: Detected virtualization amazon. Sep 6 00:28:16.056118 systemd[1]: Detected architecture x86-64. Sep 6 00:28:16.056138 systemd[1]: Detected first boot. Sep 6 00:28:16.056160 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:28:16.056181 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:28:16.056203 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:28:16.056225 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:28:16.056250 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:28:16.056274 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:28:16.056295 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:28:16.056316 systemd[1]: Stopped initrd-switch-root.service. Sep 6 00:28:16.056348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:28:16.056369 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:28:16.056394 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:28:16.056415 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 00:28:16.056434 systemd[1]: Created slice system-getty.slice. Sep 6 00:28:16.056452 systemd[1]: Created slice system-modprobe.slice. 
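After switch-root, systemd logs "Detected virtualization amazon", "Detected first boot." and "Initializing machine ID from VM UUID." The vendor string and VM UUID it draws on are exposed through DMI sysfs; the sketch below only reads those two attributes, whereas systemd's actual detection logic is considerably more involved:

from pathlib import Path

DMI = Path("/sys/class/dmi/id")

def read_dmi(name: str) -> str:
    """Read a DMI attribute such as sys_vendor or product_uuid (root may be required)."""
    try:
        return (DMI / name).read_text().strip()
    except OSError:
        return "(unavailable)"

if __name__ == "__main__":
    print("sys_vendor  :", read_dmi("sys_vendor"))    # vendor string used for virtualization detection
    print("product_uuid:", read_dmi("product_uuid"))  # VM UUID mentioned in the machine-ID message above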
Sep 6 00:28:16.056470 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:28:16.056488 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:28:16.056506 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:28:16.056525 systemd[1]: Created slice user.slice. Sep 6 00:28:16.056544 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:28:16.056565 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:28:16.056583 systemd[1]: Set up automount boot.automount. Sep 6 00:28:16.056601 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:28:16.056622 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:28:16.056640 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:28:16.056659 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:28:16.056677 systemd[1]: Reached target integritysetup.target. Sep 6 00:28:16.056696 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:28:16.056717 systemd[1]: Reached target remote-fs.target. Sep 6 00:28:16.056737 systemd[1]: Reached target slices.target. Sep 6 00:28:16.056755 systemd[1]: Reached target swap.target. Sep 6 00:28:16.056773 systemd[1]: Reached target torcx.target. Sep 6 00:28:16.056791 systemd[1]: Reached target veritysetup.target. Sep 6 00:28:16.056815 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:28:16.056833 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:28:16.056852 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:28:16.056870 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:28:16.056889 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:28:16.056910 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:28:16.056928 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:28:16.056960 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:28:16.056979 systemd[1]: Mounting media.mount... Sep 6 00:28:16.056998 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:16.057017 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:28:16.057037 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 00:28:16.057055 systemd[1]: Mounting tmp.mount... Sep 6 00:28:16.057074 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:28:16.057096 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:16.057115 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:28:16.057133 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:28:16.057152 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:16.057170 systemd[1]: Starting modprobe@drm.service... Sep 6 00:28:16.057187 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:16.057206 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:28:16.057224 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:16.057243 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:28:16.057265 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:28:16.057293 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:28:16.057314 kernel: kauditd_printk_skb: 65 callbacks suppressed Sep 6 00:28:16.057333 kernel: audit: type=1131 audit(1757118495.881:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:28:16.057355 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:28:16.057373 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:28:16.057392 kernel: audit: type=1131 audit(1757118495.895:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.057409 systemd[1]: Stopped systemd-journald.service. Sep 6 00:28:16.057429 kernel: audit: type=1130 audit(1757118495.905:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.057446 systemd[1]: Starting systemd-journald.service... Sep 6 00:28:16.057466 kernel: audit: type=1131 audit(1757118495.905:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.057486 kernel: audit: type=1334 audit(1757118495.911:109): prog-id=18 op=LOAD Sep 6 00:28:16.057504 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:28:16.057523 kernel: audit: type=1334 audit(1757118495.912:110): prog-id=19 op=LOAD Sep 6 00:28:16.057539 kernel: loop: module loaded Sep 6 00:28:16.057557 kernel: audit: type=1334 audit(1757118495.912:111): prog-id=20 op=LOAD Sep 6 00:28:16.057574 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:28:16.057593 kernel: audit: type=1334 audit(1757118495.912:112): prog-id=16 op=UNLOAD Sep 6 00:28:16.057610 kernel: audit: type=1334 audit(1757118495.912:113): prog-id=17 op=UNLOAD Sep 6 00:28:16.057630 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:28:16.057649 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:28:16.057669 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:28:16.057687 systemd[1]: Stopped verity-setup.service. Sep 6 00:28:16.057705 kernel: fuse: init (API version 7.34) Sep 6 00:28:16.057722 kernel: audit: type=1131 audit(1757118495.970:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.057740 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:16.057757 systemd[1]: Mounted dev-hugepages.mount. Sep 6 00:28:16.057777 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:28:16.057797 systemd[1]: Mounted media.mount. Sep 6 00:28:16.057815 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:28:16.057832 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:28:16.057849 systemd[1]: Mounted tmp.mount. Sep 6 00:28:16.057869 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:28:16.057887 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:28:16.057906 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:28:16.057925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:16.057957 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:16.057991 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:28:16.058010 systemd[1]: Finished modprobe@drm.service. Sep 6 00:28:16.058028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 6 00:28:16.058048 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:16.058082 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:28:16.058109 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:28:16.058136 systemd-journald[1392]: Journal started Sep 6 00:28:16.058211 systemd-journald[1392]: Runtime Journal (/run/log/journal/ec24da6112671e6b883e2f5741d05fe0) is 4.8M, max 38.3M, 33.5M free. Sep 6 00:28:11.698000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:28:11.813000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:28:11.813000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:28:11.813000 audit: BPF prog-id=10 op=LOAD Sep 6 00:28:11.813000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:28:11.813000 audit: BPF prog-id=11 op=LOAD Sep 6 00:28:11.813000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:28:12.047000 audit[1314]: AVC avc: denied { associate } for pid=1314 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:28:12.047000 audit[1314]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1297 pid=1314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:12.047000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:28:12.051000 audit[1314]: AVC avc: denied { associate } for pid=1314 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:28:12.051000 audit[1314]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=1297 pid=1314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:12.051000 audit: CWD cwd="/" Sep 6 00:28:12.051000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:12.051000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:12.051000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:28:15.721000 
audit: BPF prog-id=12 op=LOAD Sep 6 00:28:15.721000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:28:15.721000 audit: BPF prog-id=13 op=LOAD Sep 6 00:28:15.721000 audit: BPF prog-id=14 op=LOAD Sep 6 00:28:15.721000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:28:15.721000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:28:15.723000 audit: BPF prog-id=15 op=LOAD Sep 6 00:28:15.723000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:28:15.723000 audit: BPF prog-id=16 op=LOAD Sep 6 00:28:15.723000 audit: BPF prog-id=17 op=LOAD Sep 6 00:28:15.723000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:28:15.723000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:28:15.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.733000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:28:15.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.911000 audit: BPF prog-id=18 op=LOAD Sep 6 00:28:15.912000 audit: BPF prog-id=19 op=LOAD Sep 6 00:28:15.912000 audit: BPF prog-id=20 op=LOAD Sep 6 00:28:15.912000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:28:15.912000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:28:15.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:16.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.052000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:28:16.052000 audit[1392]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff7d2ea480 a2=4000 a3=7fff7d2ea51c items=0 ppid=1 pid=1392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:16.052000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:28:16.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:12.043759 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:28:15.720528 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:28:12.045481 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:28:15.720541 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 6 00:28:12.045503 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:28:15.725230 systemd[1]: systemd-journald.service: Deactivated successfully. 
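The torcx-generator audit records earlier in this stretch carry PROCTITLE fields as one long hex string in which NUL bytes separate the argv elements. A short helper to turn such a field back into a readable command line; the sample value is the first part of the proctitle logged above:

def decode_proctitle(hex_value: str) -> str:
    """Audit PROCTITLE values are hex-encoded argv elements joined by NUL bytes."""
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode(errors="replace") for part in raw.split(b"\0") if part)

# Leading portion of the proctitle value from the audit records above.
SAMPLE = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
    "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
)

if __name__ == "__main__":
    print(decode_proctitle(SAMPLE))
    # -> /usr/lib/systemd/system-generators/torcx-generator /run/systemd/generator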
Sep 6 00:28:16.064125 systemd[1]: Started systemd-journald.service. Sep 6 00:28:12.045535 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:28:12.045545 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:28:12.045579 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:28:12.045593 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:28:12.045784 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:28:12.045822 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:28:12.045835 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:28:16.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:12.047531 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:28:16.066608 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:12.047568 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:28:12.047589 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:28:12.047604 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:28:16.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:12.047622 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:28:16.069188 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:12.047636 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:12Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:28:15.134237 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:15Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:16.070764 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:28:16.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:15.134501 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:15Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:15.134611 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:15Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:15.134811 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:15Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:28:15.134860 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:15Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:28:16.072423 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:28:15.134919 /usr/lib/systemd/system-generators/torcx-generator[1314]: time="2025-09-06T00:28:15Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:28:16.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.075290 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:28:16.077127 systemd[1]: Reached target network-pre.target. 
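The interleaved torcx-generator messages document the vendor profile being applied: the docker image with reference com.coreos.cl is unpacked under /run/torcx/unpack, its binaries and systemd/networkd units are propagated, and the result is sealed into /run/torcx/profile.json and /run/metadata/torcx. A sketch of inspecting that state and selecting a user profile for the next boot, assuming the documented torcx layout (profiles under /etc/torcx/profiles, a profile name in /etc/torcx/next-profile, and the profile-manifest-v0 JSON format); the profile name custom-docker is purely illustrative:

# Inspect what torcx sealed for this boot (paths taken from the log above).
cat /run/metadata/torcx
cat /run/torcx/profile.json

# Illustrative user profile that keeps the docker image from the vendor store.
mkdir -p /etc/torcx/profiles
cat > /etc/torcx/profiles/custom-docker.json <<'EOF'
{
  "kind": "profile-manifest-v0",
  "value": {
    "images": [
      { "name": "docker", "reference": "com.coreos.cl" }
    ]
  }
}
EOF
# torcx-generator reads this file early on the next boot.
echo custom-docker > /etc/torcx/next-profile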
Sep 6 00:28:16.080653 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:28:16.083099 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:28:16.089111 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:28:16.092225 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:28:16.094663 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:28:16.097084 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:16.098586 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:28:16.099574 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:16.103158 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:28:16.105932 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:28:16.107004 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:28:16.121245 systemd-journald[1392]: Time spent on flushing to /var/log/journal/ec24da6112671e6b883e2f5741d05fe0 is 54.350ms for 1203 entries. Sep 6 00:28:16.121245 systemd-journald[1392]: System Journal (/var/log/journal/ec24da6112671e6b883e2f5741d05fe0) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:28:16.189870 systemd-journald[1392]: Received client request to flush runtime journal. Sep 6 00:28:16.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.127605 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:28:16.191195 udevadm[1430]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 6 00:28:16.130485 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:28:16.137048 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:28:16.138082 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:28:16.143561 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:28:16.146178 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:28:16.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.153977 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:28:16.191029 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:28:16.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:16.261925 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:28:16.264426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:28:16.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.361849 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:28:16.757934 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:28:16.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.758000 audit: BPF prog-id=21 op=LOAD Sep 6 00:28:16.758000 audit: BPF prog-id=22 op=LOAD Sep 6 00:28:16.758000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:28:16.758000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:28:16.759728 systemd[1]: Starting systemd-udevd.service... Sep 6 00:28:16.779396 systemd-udevd[1436]: Using default interface naming scheme 'v252'. Sep 6 00:28:16.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.854000 audit: BPF prog-id=23 op=LOAD Sep 6 00:28:16.854077 systemd[1]: Started systemd-udevd.service. Sep 6 00:28:16.856527 systemd[1]: Starting systemd-networkd.service... Sep 6 00:28:16.875000 audit: BPF prog-id=24 op=LOAD Sep 6 00:28:16.875000 audit: BPF prog-id=25 op=LOAD Sep 6 00:28:16.875000 audit: BPF prog-id=26 op=LOAD Sep 6 00:28:16.877073 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:28:16.891477 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 6 00:28:16.905144 (udev-worker)[1440]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:28:16.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:16.929894 systemd[1]: Started systemd-userdbd.service. 
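systemd-udevd starts with the v252 default naming scheme, but the udev worker notes that NamePolicy= is disabled on the kernel command line, which is why the interface keeps its classic eth0 name when systemd-networkd enumerates it further down. A small sketch for checking what udev knows about the link and which predictable names it would otherwise generate, assuming the interface is eth0:

# Properties udev recorded for the link.
udevadm info /sys/class/net/eth0
# Names the net_id builtin would propose if a naming policy were active.
udevadm test-builtin net_id /sys/class/net/eth0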
Sep 6 00:28:16.953969 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 6 00:28:16.964572 kernel: ACPI: button: Power Button [PWRF] Sep 6 00:28:16.964670 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 6 00:28:16.970980 kernel: ACPI: button: Sleep Button [SLPF] Sep 6 00:28:16.982000 audit[1441]: AVC avc: denied { confidentiality } for pid=1441 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 00:28:16.982000 audit[1441]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cecf64db70 a1=338ec a2=7f3fb4c58bc5 a3=5 items=110 ppid=1436 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:16.982000 audit: CWD cwd="/" Sep 6 00:28:16.982000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=1 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=2 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=3 name=(null) inode=15094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=4 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=5 name=(null) inode=15095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=6 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=7 name=(null) inode=15096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=8 name=(null) inode=15096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=9 name=(null) inode=15097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=10 name=(null) inode=15096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=11 name=(null) inode=15098 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 
00:28:16.982000 audit: PATH item=12 name=(null) inode=15096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=13 name=(null) inode=15099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=14 name=(null) inode=15096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=15 name=(null) inode=15100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=16 name=(null) inode=15096 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=17 name=(null) inode=15101 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=18 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=19 name=(null) inode=15102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=20 name=(null) inode=15102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=21 name=(null) inode=15103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=22 name=(null) inode=15102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=23 name=(null) inode=15104 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=24 name=(null) inode=15102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=25 name=(null) inode=15105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=26 name=(null) inode=15102 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=27 name=(null) inode=15106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=28 name=(null) inode=15102 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=29 name=(null) inode=15107 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=30 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=31 name=(null) inode=15108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=32 name=(null) inode=15108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=33 name=(null) inode=15109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=34 name=(null) inode=15108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=35 name=(null) inode=15110 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=36 name=(null) inode=15108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=37 name=(null) inode=15111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=38 name=(null) inode=15108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=39 name=(null) inode=15112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=40 name=(null) inode=15108 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=41 name=(null) inode=15113 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=42 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=43 name=(null) inode=15114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=44 name=(null) inode=15114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=45 name=(null) inode=15115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=46 name=(null) inode=15114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=47 name=(null) inode=15116 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=48 name=(null) inode=15114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=49 name=(null) inode=15117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=50 name=(null) inode=15114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=51 name=(null) inode=15118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=52 name=(null) inode=15114 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=53 name=(null) inode=15119 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=55 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=56 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=57 name=(null) inode=15121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=58 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=59 name=(null) inode=15122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=60 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=61 name=(null) 
inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=62 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=63 name=(null) inode=15124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=64 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=65 name=(null) inode=15125 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=66 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=67 name=(null) inode=15126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=68 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=69 name=(null) inode=15127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=70 name=(null) inode=15123 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=71 name=(null) inode=15128 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=72 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=73 name=(null) inode=15129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=74 name=(null) inode=15129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=75 name=(null) inode=15130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=76 name=(null) inode=15129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=77 name=(null) inode=15131 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=78 name=(null) inode=15129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=79 name=(null) inode=15132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=80 name=(null) inode=15129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=81 name=(null) inode=15133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=82 name=(null) inode=15129 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=83 name=(null) inode=15134 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=84 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=85 name=(null) inode=15135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=86 name=(null) inode=15135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=87 name=(null) inode=15136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=88 name=(null) inode=15135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=89 name=(null) inode=15137 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=90 name=(null) inode=15135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=91 name=(null) inode=15138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=92 name=(null) inode=15135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=93 name=(null) inode=15139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=94 name=(null) inode=15135 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=95 name=(null) inode=15140 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=96 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=97 name=(null) inode=15141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=98 name=(null) inode=15141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=99 name=(null) inode=15142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=100 name=(null) inode=15141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=101 name=(null) inode=15143 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=102 name=(null) inode=15141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=103 name=(null) inode=15144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=104 name=(null) inode=15141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=105 name=(null) inode=15145 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=106 name=(null) inode=15141 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=107 name=(null) inode=15146 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PATH item=109 name=(null) inode=13879 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:28:16.982000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 
00:28:17.006003 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 6 00:28:17.031976 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 6 00:28:17.036966 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 00:28:17.051544 systemd-networkd[1444]: lo: Link UP Sep 6 00:28:17.051557 systemd-networkd[1444]: lo: Gained carrier Sep 6 00:28:17.052044 systemd-networkd[1444]: Enumeration completed Sep 6 00:28:17.052147 systemd[1]: Started systemd-networkd.service. Sep 6 00:28:17.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.054042 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:28:17.054672 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:28:17.058997 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:28:17.059729 systemd-networkd[1444]: eth0: Link UP Sep 6 00:28:17.059851 systemd-networkd[1444]: eth0: Gained carrier Sep 6 00:28:17.071238 systemd-networkd[1444]: eth0: DHCPv4 address 172.31.18.181/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 6 00:28:17.168133 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:28:17.169412 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:28:17.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.171282 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:28:17.222575 lvm[1550]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:28:17.252549 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:28:17.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.253236 systemd[1]: Reached target cryptsetup.target. Sep 6 00:28:17.254878 systemd[1]: Starting lvm2-activation.service... Sep 6 00:28:17.260205 lvm[1551]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:28:17.286291 systemd[1]: Finished lvm2-activation.service. Sep 6 00:28:17.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.286973 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:28:17.287456 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:28:17.287486 systemd[1]: Reached target local-fs.target. Sep 6 00:28:17.287939 systemd[1]: Reached target machines.target. Sep 6 00:28:17.289710 systemd[1]: Starting ldconfig.service... Sep 6 00:28:17.291672 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:17.291747 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
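systemd-networkd brings up lo, matches eth0 against the stock /usr/lib/systemd/network/zz-default.network and obtains 172.31.18.181/20 with gateway 172.31.16.1 over DHCPv4, as logged above. A sketch for inspecting that state and pinning the lease statically with an override; the file name 10-static.network and the DNS address are assumptions, while the address and gateway are copied from the lease above:

# Current per-link state and the .network file that matched.
networkctl status eth0
networkctl list

# Hypothetical static override; it sorts before zz-default.network, so it wins the match.
cat > /etc/systemd/network/10-static.network <<'EOF'
[Match]
Name=eth0

[Network]
Address=172.31.18.181/20
Gateway=172.31.16.1
DNS=172.31.16.2
EOF
systemctl restart systemd-networkd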
Sep 6 00:28:17.293143 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:28:17.294758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:28:17.296888 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:28:17.299108 systemd[1]: Starting systemd-sysext.service... Sep 6 00:28:17.309354 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1553 (bootctl) Sep 6 00:28:17.310692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:28:17.319128 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:28:17.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.325006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:28:17.338175 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:28:17.338403 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:28:17.354981 kernel: loop0: detected capacity change from 0 to 224512 Sep 6 00:28:17.502071 systemd-fsck[1562]: fsck.fat 4.2 (2021-01-31) Sep 6 00:28:17.502071 systemd-fsck[1562]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters Sep 6 00:28:17.504359 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:28:17.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.506699 systemd[1]: Mounting boot.mount... Sep 6 00:28:17.532099 systemd[1]: Mounted boot.mount. Sep 6 00:28:17.546186 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:28:17.546984 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:28:17.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.563972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:28:17.567149 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:28:17.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.601007 kernel: loop1: detected capacity change from 0 to 224512 Sep 6 00:28:17.624934 (sd-sysext)[1577]: Using extensions 'kubernetes'. Sep 6 00:28:17.625407 (sd-sysext)[1577]: Merged extensions into '/usr'. Sep 6 00:28:17.643629 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:17.645246 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:28:17.646587 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:17.651050 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:17.653850 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:17.656485 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:17.657376 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
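The (sd-sysext) lines show a 'kubernetes' system extension being merged into /usr; the loop0/loop1 capacity changes and the squashfs driver load are consistent with extension images being attached. A sketch for managing extensions at runtime with the systemd 252 userland seen earlier; the standard image directories (/etc/extensions, /run/extensions, /var/lib/extensions) are assumed, not shown in this log:

# List merged extensions and the hierarchies they overlay.
systemd-sysext status
# Re-merge after adding or removing extension images in the extension directories.
systemd-sysext refresh
# Drop all extensions and return /usr to the base image content.
systemd-sysext unmerge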
Sep 6 00:28:17.657647 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:17.657906 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:17.662433 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:28:17.663540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:17.663725 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:17.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.664820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:17.665037 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:17.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.666365 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:17.666534 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:17.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.667718 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:17.667878 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:17.669432 systemd[1]: Finished systemd-sysext.service. Sep 6 00:28:17.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:17.671349 systemd[1]: Starting ensure-sysext.service... Sep 6 00:28:17.673594 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:28:17.686238 systemd[1]: Reloading. Sep 6 00:28:17.698039 systemd-tmpfiles[1584]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:28:17.703343 systemd-tmpfiles[1584]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:28:17.708630 systemd-tmpfiles[1584]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
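The systemd-tmpfiles warnings above are benign: two configuration fragments declare the same path and the later line is ignored. A sketch for reviewing the merged configuration and shadowing a vendor fragment, assuming the fragment names from the warnings; whether shadowing is actually desirable here is not something the log establishes:

# Print the merged tmpfiles configuration, annotated with the file each block comes from.
systemd-tmpfiles --cat-config
# An /etc fragment with the same name overrides the one shipped in /usr/lib/tmpfiles.d.
cp /usr/lib/tmpfiles.d/legacy.conf /etc/tmpfiles.d/legacy.conf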
Sep 6 00:28:17.802811 /usr/lib/systemd/system-generators/torcx-generator[1603]: time="2025-09-06T00:28:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:28:17.803448 /usr/lib/systemd/system-generators/torcx-generator[1603]: time="2025-09-06T00:28:17Z" level=info msg="torcx already run" Sep 6 00:28:17.943941 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:28:17.944205 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:28:17.978033 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:28:18.050000 audit: BPF prog-id=27 op=LOAD Sep 6 00:28:18.050000 audit: BPF prog-id=28 op=LOAD Sep 6 00:28:18.050000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:28:18.050000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:28:18.051000 audit: BPF prog-id=29 op=LOAD Sep 6 00:28:18.051000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:28:18.052000 audit: BPF prog-id=30 op=LOAD Sep 6 00:28:18.052000 audit: BPF prog-id=31 op=LOAD Sep 6 00:28:18.052000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:28:18.052000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:28:18.054000 audit: BPF prog-id=32 op=LOAD Sep 6 00:28:18.054000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:28:18.057000 audit: BPF prog-id=33 op=LOAD Sep 6 00:28:18.057000 audit: BPF prog-id=24 op=UNLOAD Sep 6 00:28:18.057000 audit: BPF prog-id=34 op=LOAD Sep 6 00:28:18.057000 audit: BPF prog-id=35 op=LOAD Sep 6 00:28:18.057000 audit: BPF prog-id=25 op=UNLOAD Sep 6 00:28:18.058000 audit: BPF prog-id=26 op=UNLOAD Sep 6 00:28:18.062454 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:28:18.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.069421 systemd[1]: Starting audit-rules.service... Sep 6 00:28:18.072043 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:28:18.079000 audit: BPF prog-id=36 op=LOAD Sep 6 00:28:18.078072 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:28:18.083000 audit: BPF prog-id=37 op=LOAD Sep 6 00:28:18.082537 systemd[1]: Starting systemd-resolved.service... Sep 6 00:28:18.087858 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:28:18.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.092185 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:28:18.095080 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:28:18.095989 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
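The reload (torcx reports it has already run) surfaces two deprecated resource directives in locksmithd.service, CPUShares= and MemoryLimit=, plus a legacy /var/run path in docker.socket. A sketch of quieting the locksmithd warnings with a drop-in; the replacement values below are illustrative, since the original values are not visible in this log:

# Clear the deprecated directives (empty assignment resets them) and set the
# cgroup-v2 equivalents. The values here are placeholders.
mkdir -p /etc/systemd/system/locksmithd.service.d
cat > /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf <<'EOF'
[Service]
CPUShares=
MemoryLimit=
CPUWeight=100
MemoryMax=512M
EOF
systemctl daemon-reload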
Sep 6 00:28:18.106000 audit[1668]: SYSTEM_BOOT pid=1668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.116314 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.120638 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:18.124686 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:18.128404 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:18.131398 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.131597 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:18.131754 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:28:18.133264 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:28:18.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.136681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:18.136867 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:18.138354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:18.138517 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:18.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.140818 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:18.141211 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:18.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.145795 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 6 00:28:18.145974 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.150149 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.153042 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:18.157365 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:18.161138 systemd[1]: Starting modprobe@loop.service... Sep 6 00:28:18.162717 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.162934 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:18.163126 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:28:18.164452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:18.164659 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:18.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.167109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:18.168004 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:18.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.173532 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.176525 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:28:18.180372 systemd[1]: Starting modprobe@drm.service... Sep 6 00:28:18.185038 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:28:18.188593 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.188805 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:18.189024 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:28:18.190351 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:28:18.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:28:18.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.190552 systemd[1]: Finished modprobe@loop.service. Sep 6 00:28:18.191789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:28:18.191986 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:28:18.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.193188 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:28:18.193360 systemd[1]: Finished modprobe@drm.service. Sep 6 00:28:18.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.195222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:28:18.195441 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:28:18.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.199808 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:28:18.199874 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:28:18.201047 systemd[1]: Finished ensure-sysext.service. Sep 6 00:28:18.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:28:18.240000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:28:18.240000 audit[1689]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffceffbee70 a2=420 a3=0 items=0 ppid=1660 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:28:18.240000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:28:18.241710 augenrules[1689]: No rules Sep 6 00:28:18.243758 systemd[1]: Finished audit-rules.service. 
Sep 6 00:28:18.252973 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:28:18.271313 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:28:18.272058 systemd[1]: Reached target time-set.target. Sep 6 00:28:18.297695 systemd-resolved[1664]: Positive Trust Anchors: Sep 6 00:28:18.297715 systemd-resolved[1664]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:28:18.297748 systemd-resolved[1664]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:28:18.341049 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:18.341078 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 00:28:18.346373 systemd-resolved[1664]: Defaulting to hostname 'linux'. Sep 6 00:28:20.269101 systemd-timesyncd[1667]: Contacted time server 104.207.148.118:123 (0.flatcar.pool.ntp.org). Sep 6 00:28:20.269243 systemd-timesyncd[1667]: Initial clock synchronization to Sat 2025-09-06 00:28:20.268853 UTC. Sep 6 00:28:20.274010 systemd-resolved[1664]: Clock change detected. Flushing caches. Sep 6 00:28:20.274023 systemd[1]: Started systemd-resolved.service. Sep 6 00:28:20.275144 systemd[1]: Reached target network.target. Sep 6 00:28:20.275699 systemd[1]: Reached target nss-lookup.target. Sep 6 00:28:20.364054 ldconfig[1552]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:28:20.373750 systemd[1]: Finished ldconfig.service. Sep 6 00:28:20.375626 systemd[1]: Starting systemd-update-done.service... Sep 6 00:28:20.385545 systemd[1]: Finished systemd-update-done.service. Sep 6 00:28:20.386088 systemd[1]: Reached target sysinit.target. Sep 6 00:28:20.386644 systemd[1]: Started motdgen.path. Sep 6 00:28:20.387070 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:28:20.387647 systemd[1]: Started logrotate.timer. Sep 6 00:28:20.388112 systemd[1]: Started mdadm.timer. Sep 6 00:28:20.388860 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:28:20.389241 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:28:20.389290 systemd[1]: Reached target paths.target. Sep 6 00:28:20.389681 systemd[1]: Reached target timers.target. Sep 6 00:28:20.390390 systemd[1]: Listening on dbus.socket. Sep 6 00:28:20.391959 systemd[1]: Starting docker.socket... Sep 6 00:28:20.397522 systemd[1]: Listening on sshd.socket. Sep 6 00:28:20.398127 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:20.398721 systemd[1]: Listening on docker.socket. Sep 6 00:28:20.399197 systemd[1]: Reached target sockets.target. Sep 6 00:28:20.399675 systemd[1]: Reached target basic.target. Sep 6 00:28:20.400093 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Sep 6 00:28:20.400134 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:28:20.401641 systemd[1]: Starting containerd.service... Sep 6 00:28:20.403706 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 00:28:20.406148 systemd[1]: Starting dbus.service... Sep 6 00:28:20.407817 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:28:20.409941 systemd[1]: Starting extend-filesystems.service... Sep 6 00:28:20.410589 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:28:20.413798 systemd[1]: Starting motdgen.service... Sep 6 00:28:20.416586 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:28:20.419234 systemd[1]: Starting sshd-keygen.service... Sep 6 00:28:20.426577 systemd[1]: Starting systemd-logind.service... Sep 6 00:28:20.427203 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:28:20.427306 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:28:20.427986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:28:20.429084 systemd[1]: Starting update-engine.service... Sep 6 00:28:20.434464 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:28:20.453072 jq[1708]: true Sep 6 00:28:20.457532 jq[1701]: false Sep 6 00:28:20.457926 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:28:20.458179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:28:20.466290 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:28:20.466539 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:28:20.483895 jq[1713]: true Sep 6 00:28:20.520211 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:28:20.520461 systemd[1]: Finished motdgen.service. Sep 6 00:28:20.526522 dbus-daemon[1700]: [system] SELinux support is enabled Sep 6 00:28:20.526715 systemd[1]: Started dbus.service. Sep 6 00:28:20.530485 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:28:20.530535 systemd[1]: Reached target system-config.target. Sep 6 00:28:20.531214 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:28:20.531258 systemd[1]: Reached target user-config.target. Sep 6 00:28:20.535511 dbus-daemon[1700]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1444 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 6 00:28:20.540572 systemd[1]: Starting systemd-hostnamed.service... 
Sep 6 00:28:20.542081 extend-filesystems[1702]: Found loop1 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p1 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p2 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p3 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found usr Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p4 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p6 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p7 Sep 6 00:28:20.542081 extend-filesystems[1702]: Found nvme0n1p9 Sep 6 00:28:20.542081 extend-filesystems[1702]: Checking size of /dev/nvme0n1p9 Sep 6 00:28:20.583907 extend-filesystems[1702]: Resized partition /dev/nvme0n1p9 Sep 6 00:28:20.601958 extend-filesystems[1745]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:28:20.610345 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 6 00:28:20.620959 env[1712]: time="2025-09-06T00:28:20.620890104Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:28:20.634801 update_engine[1707]: I0906 00:28:20.633874 1707 main.cc:92] Flatcar Update Engine starting Sep 6 00:28:20.650173 systemd[1]: Started update-engine.service. Sep 6 00:28:20.650538 update_engine[1707]: I0906 00:28:20.650231 1707 update_check_scheduler.cc:74] Next update check in 6m12s Sep 6 00:28:20.653160 systemd[1]: Started locksmithd.service. Sep 6 00:28:20.656535 systemd-networkd[1444]: eth0: Gained IPv6LL Sep 6 00:28:20.658846 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:28:20.659562 systemd[1]: Reached target network-online.target. Sep 6 00:28:20.661933 systemd[1]: Started amazon-ssm-agent.service. Sep 6 00:28:20.665529 systemd[1]: Starting kubelet.service... Sep 6 00:28:20.668745 systemd[1]: Started nvidia.service. Sep 6 00:28:20.798340 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 6 00:28:20.818975 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:28:20.819965 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:28:20.821158 extend-filesystems[1745]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 6 00:28:20.821158 extend-filesystems[1745]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:28:20.821158 extend-filesystems[1745]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 6 00:28:20.824803 extend-filesystems[1702]: Resized filesystem in /dev/nvme0n1p9 Sep 6 00:28:20.821850 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:28:20.822067 systemd[1]: Finished extend-filesystems.service. Sep 6 00:28:20.871974 env[1712]: time="2025-09-06T00:28:20.871918147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:28:20.892411 env[1712]: time="2025-09-06T00:28:20.892362063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:20.897974 env[1712]: time="2025-09-06T00:28:20.897909591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:28:20.919606 amazon-ssm-agent[1762]: 2025/09/06 00:28:20 Failed to load instance info from vault. 
RegistrationKey does not exist. Sep 6 00:28:20.921273 amazon-ssm-agent[1762]: Initializing new seelog logger Sep 6 00:28:20.921490 amazon-ssm-agent[1762]: New Seelog Logger Creation Complete Sep 6 00:28:20.921572 amazon-ssm-agent[1762]: 2025/09/06 00:28:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 6 00:28:20.921572 amazon-ssm-agent[1762]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 6 00:28:20.921856 amazon-ssm-agent[1762]: 2025/09/06 00:28:20 processing appconfig overrides Sep 6 00:28:20.927881 systemd[1]: Created slice system-sshd.slice. Sep 6 00:28:20.939204 env[1712]: time="2025-09-06T00:28:20.939100969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:20.939654 env[1712]: time="2025-09-06T00:28:20.939622763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:28:20.939735 env[1712]: time="2025-09-06T00:28:20.939657033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:20.939735 env[1712]: time="2025-09-06T00:28:20.939693172Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:28:20.939735 env[1712]: time="2025-09-06T00:28:20.939709642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:20.939866 env[1712]: time="2025-09-06T00:28:20.939848552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:20.940260 env[1712]: time="2025-09-06T00:28:20.940233308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:28:20.940545 env[1712]: time="2025-09-06T00:28:20.940518298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:28:20.940603 env[1712]: time="2025-09-06T00:28:20.940547199Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:28:20.940669 env[1712]: time="2025-09-06T00:28:20.940634969Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:28:20.940735 env[1712]: time="2025-09-06T00:28:20.940670633Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:28:20.946230 env[1712]: time="2025-09-06T00:28:20.946182526Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:28:20.946385 env[1712]: time="2025-09-06T00:28:20.946238526Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:28:20.946385 env[1712]: time="2025-09-06T00:28:20.946257896Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:28:20.946385 env[1712]: time="2025-09-06T00:28:20.946307507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946385325Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946407376Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946429272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946449964Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946468419Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946488938Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946507816Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.946530 env[1712]: time="2025-09-06T00:28:20.946527453Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:28:20.946861 env[1712]: time="2025-09-06T00:28:20.946667213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:28:20.946861 env[1712]: time="2025-09-06T00:28:20.946769998Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:28:20.947266 env[1712]: time="2025-09-06T00:28:20.947226274Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:28:20.947347 env[1712]: time="2025-09-06T00:28:20.947285514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947347 env[1712]: time="2025-09-06T00:28:20.947306934Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:28:20.947441 env[1712]: time="2025-09-06T00:28:20.947394374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947441 env[1712]: time="2025-09-06T00:28:20.947417105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947514 env[1712]: time="2025-09-06T00:28:20.947495845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947560 env[1712]: time="2025-09-06T00:28:20.947519422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947560 env[1712]: time="2025-09-06T00:28:20.947537615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947635 env[1712]: time="2025-09-06T00:28:20.947556365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947635 env[1712]: time="2025-09-06T00:28:20.947574043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 6 00:28:20.947635 env[1712]: time="2025-09-06T00:28:20.947592664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947635 env[1712]: time="2025-09-06T00:28:20.947612995Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:28:20.947799 env[1712]: time="2025-09-06T00:28:20.947771149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947799 env[1712]: time="2025-09-06T00:28:20.947791924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947878 env[1712]: time="2025-09-06T00:28:20.947811266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:28:20.947878 env[1712]: time="2025-09-06T00:28:20.947829623Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 00:28:20.947878 env[1712]: time="2025-09-06T00:28:20.947850943Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:28:20.947878 env[1712]: time="2025-09-06T00:28:20.947869628Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:28:20.948017 env[1712]: time="2025-09-06T00:28:20.947896021Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:28:20.948017 env[1712]: time="2025-09-06T00:28:20.947939332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 00:28:20.948295 env[1712]: time="2025-09-06T00:28:20.948216033Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:28:20.952240 env[1712]: time="2025-09-06T00:28:20.948309238Z" level=info msg="Connect containerd service" Sep 6 00:28:20.952240 env[1712]: time="2025-09-06T00:28:20.948379185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:28:20.952240 env[1712]: time="2025-09-06T00:28:20.949121533Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:28:20.952240 env[1712]: time="2025-09-06T00:28:20.949429527Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:28:20.952240 env[1712]: time="2025-09-06T00:28:20.949478490Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:28:20.952240 env[1712]: time="2025-09-06T00:28:20.950514723Z" level=info msg="containerd successfully booted in 0.332043s" Sep 6 00:28:20.949622 systemd[1]: Started containerd.service. 
Sep 6 00:28:20.954028 env[1712]: time="2025-09-06T00:28:20.953988382Z" level=info msg="Start subscribing containerd event" Sep 6 00:28:20.958692 env[1712]: time="2025-09-06T00:28:20.958654613Z" level=info msg="Start recovering state" Sep 6 00:28:20.958910 env[1712]: time="2025-09-06T00:28:20.958893449Z" level=info msg="Start event monitor" Sep 6 00:28:20.958984 env[1712]: time="2025-09-06T00:28:20.958972819Z" level=info msg="Start snapshots syncer" Sep 6 00:28:20.959046 env[1712]: time="2025-09-06T00:28:20.959034569Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:28:20.959729 env[1712]: time="2025-09-06T00:28:20.959708608Z" level=info msg="Start streaming server" Sep 6 00:28:20.966883 systemd-logind[1706]: Watching system buttons on /dev/input/event1 (Power Button) Sep 6 00:28:20.966915 systemd-logind[1706]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 6 00:28:20.966940 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 6 00:28:20.967769 dbus-daemon[1700]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 6 00:28:20.967967 systemd[1]: Started systemd-hostnamed.service. Sep 6 00:28:20.968402 dbus-daemon[1700]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1732 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 6 00:28:20.970382 systemd-logind[1706]: New seat seat0. Sep 6 00:28:20.974346 systemd[1]: Starting polkit.service... Sep 6 00:28:20.995413 systemd[1]: Started systemd-logind.service. Sep 6 00:28:21.007143 polkitd[1802]: Started polkitd version 121 Sep 6 00:28:21.036100 polkitd[1802]: Loading rules from directory /etc/polkit-1/rules.d Sep 6 00:28:21.036357 polkitd[1802]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 6 00:28:21.043746 polkitd[1802]: Finished loading, compiling and executing 2 rules Sep 6 00:28:21.045021 dbus-daemon[1700]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 6 00:28:21.045704 systemd[1]: Started polkit.service. Sep 6 00:28:21.048135 polkitd[1802]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 6 00:28:21.078668 systemd-hostnamed[1732]: Hostname set to (transient) Sep 6 00:28:21.078795 systemd-resolved[1664]: System hostname changed to 'ip-172-31-18-181'. Sep 6 00:28:21.142259 coreos-metadata[1699]: Sep 06 00:28:21.140 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:28:21.147756 coreos-metadata[1699]: Sep 06 00:28:21.147 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 6 00:28:21.148723 coreos-metadata[1699]: Sep 06 00:28:21.148 INFO Fetch successful Sep 6 00:28:21.148868 coreos-metadata[1699]: Sep 06 00:28:21.148 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 6 00:28:21.151553 coreos-metadata[1699]: Sep 06 00:28:21.151 INFO Fetch successful Sep 6 00:28:21.154123 unknown[1699]: wrote ssh authorized keys file for user: core Sep 6 00:28:21.165361 systemd[1]: nvidia.service: Deactivated successfully. Sep 6 00:28:21.176066 update-ssh-keys[1849]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:28:21.176635 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Sep 6 00:28:21.489418 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Create new startup processor Sep 6 00:28:21.489546 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [LongRunningPluginsManager] registered plugins: {} Sep 6 00:28:21.489546 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing bookkeeping folders Sep 6 00:28:21.489546 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO removing the completed state files Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing bookkeeping folders for long running plugins Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing healthcheck folders for long running plugins Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing locations for inventory plugin Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing default location for custom inventory Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing default location for file inventory Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Initializing default location for role inventory Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Init the cloudwatchlogs publisher Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:configurePackage Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:downloadContent Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:runDocument Sep 6 00:28:21.489675 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:refreshAssociation Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:softwareInventory Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:configureDocker Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform independent plugin aws:runDockerAction Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Successfully loaded platform dependent plugin aws:runShellScript Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 6 00:28:21.490072 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO OS: linux, Arch: amd64 Sep 6 00:28:21.490630 amazon-ssm-agent[1762]: datastore file /var/lib/amazon/ssm/i-0cf8599a459f13eb0/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Sep 6 
00:28:21.492197 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] Starting document processing engine... Sep 6 00:28:21.589177 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 6 00:28:21.683487 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 6 00:28:21.687664 locksmithd[1761]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:28:21.773635 sshd_keygen[1725]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:28:21.777926 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] Starting message polling Sep 6 00:28:21.800945 systemd[1]: Finished sshd-keygen.service. Sep 6 00:28:21.803525 systemd[1]: Starting issuegen.service... Sep 6 00:28:21.806104 systemd[1]: Started sshd@0-172.31.18.181:22-139.178.68.195:50698.service. Sep 6 00:28:21.813084 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:28:21.813303 systemd[1]: Finished issuegen.service. Sep 6 00:28:21.816162 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:28:21.827217 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:28:21.830278 systemd[1]: Started getty@tty1.service. Sep 6 00:28:21.833804 systemd[1]: Started serial-getty@ttyS0.service. Sep 6 00:28:21.835121 systemd[1]: Reached target getty.target. Sep 6 00:28:21.872701 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 6 00:28:21.967747 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [instanceID=i-0cf8599a459f13eb0] Starting association polling Sep 6 00:28:22.040417 sshd[1900]: Accepted publickey for core from 139.178.68.195 port 50698 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:22.044293 sshd[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:22.061096 systemd[1]: Created slice user-500.slice. Sep 6 00:28:22.063429 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:28:22.065096 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 6 00:28:22.069993 systemd-logind[1706]: New session 1 of user core. Sep 6 00:28:22.080293 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:28:22.083573 systemd[1]: Starting user@500.service... Sep 6 00:28:22.089290 (systemd)[1908]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:22.160436 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 6 00:28:22.190925 systemd[1908]: Queued start job for default target default.target. Sep 6 00:28:22.192234 systemd[1908]: Reached target paths.target. Sep 6 00:28:22.192271 systemd[1908]: Reached target sockets.target. Sep 6 00:28:22.192289 systemd[1908]: Reached target timers.target. Sep 6 00:28:22.192306 systemd[1908]: Reached target basic.target. Sep 6 00:28:22.192467 systemd[1]: Started user@500.service. Sep 6 00:28:22.193408 systemd[1908]: Reached target default.target. Sep 6 00:28:22.193470 systemd[1908]: Startup finished in 96ms. Sep 6 00:28:22.194564 systemd[1]: Started session-1.scope. Sep 6 00:28:22.255920 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 6 00:28:22.344153 systemd[1]: Started sshd@1-172.31.18.181:22-139.178.68.195:50710.service. 
Sep 6 00:28:22.352655 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 6 00:28:22.448628 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 6 00:28:22.517685 sshd[1917]: Accepted publickey for core from 139.178.68.195 port 50710 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:22.519159 sshd[1917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:22.525164 systemd-logind[1706]: New session 2 of user core. Sep 6 00:28:22.525687 systemd[1]: Started session-2.scope. Sep 6 00:28:22.544634 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [OfflineService] Starting document processing engine... Sep 6 00:28:22.641720 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [OfflineService] [EngineProcessor] Starting Sep 6 00:28:22.656096 sshd[1917]: pam_unix(sshd:session): session closed for user core Sep 6 00:28:22.659435 systemd[1]: sshd@1-172.31.18.181:22-139.178.68.195:50710.service: Deactivated successfully. Sep 6 00:28:22.660999 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:28:22.661622 systemd-logind[1706]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:28:22.662921 systemd-logind[1706]: Removed session 2. Sep 6 00:28:22.681124 systemd[1]: Started sshd@2-172.31.18.181:22-139.178.68.195:50722.service. Sep 6 00:28:22.737832 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [OfflineService] [EngineProcessor] Initial processing Sep 6 00:28:22.834523 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [OfflineService] Starting message polling Sep 6 00:28:22.838975 sshd[1923]: Accepted publickey for core from 139.178.68.195 port 50722 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:22.840414 sshd[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:22.846020 systemd[1]: Started session-3.scope. Sep 6 00:28:22.847423 systemd-logind[1706]: New session 3 of user core. Sep 6 00:28:22.931450 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [OfflineService] Starting send replies to MDS Sep 6 00:28:22.973622 sshd[1923]: pam_unix(sshd:session): session closed for user core Sep 6 00:28:22.976323 systemd[1]: sshd@2-172.31.18.181:22-139.178.68.195:50722.service: Deactivated successfully. Sep 6 00:28:22.977297 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:28:22.978079 systemd-logind[1706]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:28:22.979445 systemd-logind[1706]: Removed session 3. Sep 6 00:28:23.028484 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 6 00:28:23.125795 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 6 00:28:23.137417 systemd[1]: Started kubelet.service. Sep 6 00:28:23.139017 systemd[1]: Reached target multi-user.target. Sep 6 00:28:23.141667 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:28:23.151446 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:28:23.151670 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:28:23.152344 systemd[1]: Startup finished in 603ms (kernel) + 5.821s (initrd) + 9.649s (userspace) = 16.074s. 
Sep 6 00:28:23.223326 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:28:23.321112 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] Starting session document processing engine... Sep 6 00:28:23.418961 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 6 00:28:23.517555 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 6 00:28:23.615327 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0cf8599a459f13eb0, requestId: 0a7ed515-9e5e-446b-ac04-ddba9efb73ca Sep 6 00:28:23.713876 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] listening reply. Sep 6 00:28:23.812594 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 6 00:28:23.911365 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [StartupProcessor] Executing startup processor tasks Sep 6 00:28:24.010417 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 6 00:28:24.109766 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 6 00:28:24.209308 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 6 00:28:24.257012 kubelet[1930]: E0906 00:28:24.256805 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:28:24.258766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:28:24.258948 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:28:24.259260 systemd[1]: kubelet.service: Consumed 1.167s CPU time. Sep 6 00:28:24.309608 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0cf8599a459f13eb0?role=subscribe&stream=input Sep 6 00:28:24.409527 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0cf8599a459f13eb0?role=subscribe&stream=input Sep 6 00:28:24.509465 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] Starting receiving message from control channel Sep 6 00:28:24.609683 amazon-ssm-agent[1762]: 2025-09-06 00:28:21 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 6 00:28:33.001927 systemd[1]: Started sshd@3-172.31.18.181:22-139.178.68.195:47212.service. Sep 6 00:28:33.163271 sshd[1937]: Accepted publickey for core from 139.178.68.195 port 47212 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:33.165170 sshd[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:33.170255 systemd-logind[1706]: New session 4 of user core. Sep 6 00:28:33.170908 systemd[1]: Started session-4.scope. 
Sep 6 00:28:33.300397 sshd[1937]: pam_unix(sshd:session): session closed for user core Sep 6 00:28:33.303820 systemd[1]: sshd@3-172.31.18.181:22-139.178.68.195:47212.service: Deactivated successfully. Sep 6 00:28:33.304744 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:28:33.305435 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:28:33.306363 systemd-logind[1706]: Removed session 4. Sep 6 00:28:33.326565 systemd[1]: Started sshd@4-172.31.18.181:22-139.178.68.195:47214.service. Sep 6 00:28:33.485154 sshd[1943]: Accepted publickey for core from 139.178.68.195 port 47214 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:33.486759 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:33.491865 systemd-logind[1706]: New session 5 of user core. Sep 6 00:28:33.492462 systemd[1]: Started session-5.scope. Sep 6 00:28:33.616127 sshd[1943]: pam_unix(sshd:session): session closed for user core Sep 6 00:28:33.619338 systemd[1]: sshd@4-172.31.18.181:22-139.178.68.195:47214.service: Deactivated successfully. Sep 6 00:28:33.621002 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:28:33.622178 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:28:33.623146 systemd-logind[1706]: Removed session 5. Sep 6 00:28:33.644064 systemd[1]: Started sshd@5-172.31.18.181:22-139.178.68.195:47228.service. Sep 6 00:28:33.805639 sshd[1949]: Accepted publickey for core from 139.178.68.195 port 47228 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:33.807132 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:33.812827 systemd[1]: Started session-6.scope. Sep 6 00:28:33.813362 systemd-logind[1706]: New session 6 of user core. Sep 6 00:28:33.942177 sshd[1949]: pam_unix(sshd:session): session closed for user core Sep 6 00:28:33.945886 systemd[1]: sshd@5-172.31.18.181:22-139.178.68.195:47228.service: Deactivated successfully. Sep 6 00:28:33.946769 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 00:28:33.947607 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit. Sep 6 00:28:33.948845 systemd-logind[1706]: Removed session 6. Sep 6 00:28:33.969349 systemd[1]: Started sshd@6-172.31.18.181:22-139.178.68.195:47234.service. Sep 6 00:28:34.137466 sshd[1955]: Accepted publickey for core from 139.178.68.195 port 47234 ssh2: RSA SHA256:XzAVKqTn5JA8LhAwCCkjW+qfEc9zGxl5jKXGmKv4mAc Sep 6 00:28:34.138960 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:28:34.144283 systemd-logind[1706]: New session 7 of user core. Sep 6 00:28:34.145051 systemd[1]: Started session-7.scope. Sep 6 00:28:34.280546 sudo[1958]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:28:34.281489 sudo[1958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:28:34.282306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 00:28:34.282560 systemd[1]: Stopped kubelet.service. Sep 6 00:28:34.282606 systemd[1]: kubelet.service: Consumed 1.167s CPU time. Sep 6 00:28:34.283989 systemd[1]: Starting kubelet.service... Sep 6 00:28:34.300532 systemd[1]: Starting coreos-metadata.service... 
Sep 6 00:28:34.400264 coreos-metadata[1964]: Sep 06 00:28:34.400 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 6 00:28:34.401207 coreos-metadata[1964]: Sep 06 00:28:34.401 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Sep 6 00:28:34.402449 coreos-metadata[1964]: Sep 06 00:28:34.402 INFO Fetch successful Sep 6 00:28:34.402553 coreos-metadata[1964]: Sep 06 00:28:34.402 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Sep 6 00:28:34.403333 coreos-metadata[1964]: Sep 06 00:28:34.403 INFO Fetch successful Sep 6 00:28:34.403463 coreos-metadata[1964]: Sep 06 00:28:34.403 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Sep 6 00:28:34.404326 coreos-metadata[1964]: Sep 06 00:28:34.404 INFO Fetch successful Sep 6 00:28:34.404425 coreos-metadata[1964]: Sep 06 00:28:34.404 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Sep 6 00:28:34.405194 coreos-metadata[1964]: Sep 06 00:28:34.405 INFO Fetch successful Sep 6 00:28:34.405194 coreos-metadata[1964]: Sep 06 00:28:34.405 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Sep 6 00:28:34.406051 coreos-metadata[1964]: Sep 06 00:28:34.406 INFO Fetch successful Sep 6 00:28:34.406051 coreos-metadata[1964]: Sep 06 00:28:34.406 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Sep 6 00:28:34.407069 coreos-metadata[1964]: Sep 06 00:28:34.407 INFO Fetch successful Sep 6 00:28:34.407069 coreos-metadata[1964]: Sep 06 00:28:34.407 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Sep 6 00:28:34.408470 coreos-metadata[1964]: Sep 06 00:28:34.408 INFO Fetch successful Sep 6 00:28:34.408470 coreos-metadata[1964]: Sep 06 00:28:34.408 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Sep 6 00:28:34.409906 coreos-metadata[1964]: Sep 06 00:28:34.409 INFO Fetch successful Sep 6 00:28:34.419472 systemd[1]: Finished coreos-metadata.service. Sep 6 00:28:35.388779 systemd[1]: Started kubelet.service. Sep 6 00:28:35.460477 kubelet[1982]: E0906 00:28:35.460423 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:28:35.464598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:28:35.464888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:28:35.983121 systemd[1]: Stopped kubelet.service. Sep 6 00:28:35.986595 systemd[1]: Starting kubelet.service... Sep 6 00:28:36.029489 systemd[1]: Reloading. Sep 6 00:28:36.162963 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2025-09-06T00:28:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:28:36.163002 /usr/lib/systemd/system-generators/torcx-generator[2028]: time="2025-09-06T00:28:36Z" level=info msg="torcx already run" Sep 6 00:28:36.280981 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Sep 6 00:28:36.281006 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:28:36.300838 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:28:36.415676 systemd[1]: Stopping kubelet.service... Sep 6 00:28:36.416345 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:28:36.416567 systemd[1]: Stopped kubelet.service. Sep 6 00:28:36.418701 systemd[1]: Starting kubelet.service... Sep 6 00:28:36.639874 systemd[1]: Started kubelet.service. Sep 6 00:28:36.703994 kubelet[2088]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:28:36.703994 kubelet[2088]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:28:36.703994 kubelet[2088]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:28:36.705439 kubelet[2088]: I0906 00:28:36.704103 2088 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:28:37.131582 kubelet[2088]: I0906 00:28:37.131345 2088 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 6 00:28:37.131582 kubelet[2088]: I0906 00:28:37.131396 2088 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:28:37.132288 kubelet[2088]: I0906 00:28:37.132244 2088 server.go:954] "Client rotation is on, will bootstrap in background" Sep 6 00:28:37.199104 kubelet[2088]: I0906 00:28:37.199072 2088 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:28:37.231844 kubelet[2088]: E0906 00:28:37.231780 2088 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:28:37.231844 kubelet[2088]: I0906 00:28:37.231820 2088 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:28:37.234530 kubelet[2088]: I0906 00:28:37.234503 2088 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 00:28:37.234791 kubelet[2088]: I0906 00:28:37.234749 2088 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:28:37.234980 kubelet[2088]: I0906 00:28:37.234787 2088 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.18.181","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:28:37.235095 kubelet[2088]: I0906 00:28:37.234985 2088 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:28:37.235095 kubelet[2088]: I0906 00:28:37.234994 2088 container_manager_linux.go:304] "Creating device plugin manager" Sep 6 00:28:37.235161 kubelet[2088]: I0906 00:28:37.235107 2088 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:28:37.242007 kubelet[2088]: I0906 00:28:37.241974 2088 kubelet.go:446] "Attempting to sync node with API server" Sep 6 00:28:37.242007 kubelet[2088]: I0906 00:28:37.242019 2088 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:28:37.242193 kubelet[2088]: I0906 00:28:37.242043 2088 kubelet.go:352] "Adding apiserver pod source" Sep 6 00:28:37.242193 kubelet[2088]: I0906 00:28:37.242054 2088 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:28:37.242616 kubelet[2088]: E0906 00:28:37.242581 2088 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:37.242733 kubelet[2088]: E0906 00:28:37.242632 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:37.249190 kubelet[2088]: I0906 00:28:37.249158 2088 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:28:37.249722 kubelet[2088]: I0906 00:28:37.249603 2088 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:28:37.250990 kubelet[2088]: W0906 00:28:37.250954 2088 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 00:28:37.254901 kubelet[2088]: I0906 00:28:37.254863 2088 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 6 00:28:37.255055 kubelet[2088]: I0906 00:28:37.254923 2088 server.go:1287] "Started kubelet" Sep 6 00:28:37.269457 kubelet[2088]: I0906 00:28:37.269422 2088 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:28:37.270551 kubelet[2088]: I0906 00:28:37.270459 2088 server.go:479] "Adding debug handlers to kubelet server" Sep 6 00:28:37.273047 kubelet[2088]: I0906 00:28:37.272973 2088 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:28:37.273369 kubelet[2088]: I0906 00:28:37.273354 2088 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:28:37.274456 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:28:37.275359 kubelet[2088]: I0906 00:28:37.274611 2088 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:28:37.276379 kubelet[2088]: I0906 00:28:37.276059 2088 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:28:37.285338 kubelet[2088]: E0906 00:28:37.282575 2088 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.18.181\" not found" Sep 6 00:28:37.285338 kubelet[2088]: I0906 00:28:37.282612 2088 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 6 00:28:37.285338 kubelet[2088]: I0906 00:28:37.282879 2088 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 6 00:28:37.285338 kubelet[2088]: I0906 00:28:37.282945 2088 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:28:37.285338 kubelet[2088]: E0906 00:28:37.283308 2088 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:28:37.298710 kubelet[2088]: E0906 00:28:37.298671 2088 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.18.181\" not found" node="172.31.18.181" Sep 6 00:28:37.308596 kubelet[2088]: I0906 00:28:37.308564 2088 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:28:37.308807 kubelet[2088]: I0906 00:28:37.308790 2088 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:28:37.309017 kubelet[2088]: I0906 00:28:37.308992 2088 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:28:37.328411 kubelet[2088]: I0906 00:28:37.328382 2088 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 6 00:28:37.328411 kubelet[2088]: I0906 00:28:37.328407 2088 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 6 00:28:37.328580 kubelet[2088]: I0906 00:28:37.328430 2088 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:28:37.331080 kubelet[2088]: I0906 00:28:37.331056 2088 policy_none.go:49] "None policy: Start" Sep 6 00:28:37.331226 kubelet[2088]: I0906 00:28:37.331090 2088 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:28:37.331226 kubelet[2088]: I0906 00:28:37.331106 2088 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:28:37.347723 systemd[1]: Created slice kubepods.slice. Sep 6 00:28:37.366536 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:28:37.371990 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 00:28:37.384484 kubelet[2088]: I0906 00:28:37.384381 2088 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:28:37.386555 kubelet[2088]: I0906 00:28:37.385620 2088 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:28:37.386555 kubelet[2088]: I0906 00:28:37.385642 2088 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:28:37.388878 kubelet[2088]: E0906 00:28:37.388852 2088 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:28:37.389055 kubelet[2088]: E0906 00:28:37.389040 2088 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.181\" not found" Sep 6 00:28:37.389340 kubelet[2088]: I0906 00:28:37.389099 2088 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:28:37.487385 kubelet[2088]: I0906 00:28:37.487358 2088 kubelet_node_status.go:75] "Attempting to register node" node="172.31.18.181" Sep 6 00:28:37.498796 kubelet[2088]: I0906 00:28:37.498656 2088 kubelet_node_status.go:78] "Successfully registered node" node="172.31.18.181" Sep 6 00:28:37.509946 kubelet[2088]: I0906 00:28:37.509925 2088 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 6 00:28:37.510495 env[1712]: time="2025-09-06T00:28:37.510417318Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 6 00:28:37.510941 kubelet[2088]: I0906 00:28:37.510928 2088 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 6 00:28:37.530694 kubelet[2088]: I0906 00:28:37.530640 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:28:37.531955 kubelet[2088]: I0906 00:28:37.531906 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 00:28:37.531955 kubelet[2088]: I0906 00:28:37.531945 2088 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 6 00:28:37.532111 kubelet[2088]: I0906 00:28:37.531967 2088 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 6 00:28:37.532111 kubelet[2088]: I0906 00:28:37.531973 2088 kubelet.go:2382] "Starting kubelet main sync loop" Sep 6 00:28:37.532111 kubelet[2088]: E0906 00:28:37.532021 2088 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 00:28:37.680102 sudo[1958]: pam_unix(sudo:session): session closed for user root Sep 6 00:28:37.705080 sshd[1955]: pam_unix(sshd:session): session closed for user core Sep 6 00:28:37.708445 systemd[1]: sshd@6-172.31.18.181:22-139.178.68.195:47234.service: Deactivated successfully. Sep 6 00:28:37.709629 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 00:28:37.710436 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. Sep 6 00:28:37.711473 systemd-logind[1706]: Removed session 7. Sep 6 00:28:38.134493 kubelet[2088]: I0906 00:28:38.134344 2088 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 6 00:28:38.135097 kubelet[2088]: W0906 00:28:38.134874 2088 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:28:38.135211 kubelet[2088]: W0906 00:28:38.135190 2088 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:28:38.135292 kubelet[2088]: W0906 00:28:38.135263 2088 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:28:38.243110 kubelet[2088]: E0906 00:28:38.243063 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:38.247224 kubelet[2088]: I0906 00:28:38.247173 2088 apiserver.go:52] "Watching apiserver" Sep 6 00:28:38.258042 systemd[1]: Created slice kubepods-besteffort-pod109b8a2a_1223_4fda_81c2_767c3de02848.slice. Sep 6 00:28:38.269738 systemd[1]: Created slice kubepods-burstable-pod889a0a7f_c309_48b8_8ada_69f6a7916d34.slice. 
Sep 6 00:28:38.284963 kubelet[2088]: I0906 00:28:38.284912 2088 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:28:38.290107 kubelet[2088]: I0906 00:28:38.290065 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/109b8a2a-1223-4fda-81c2-767c3de02848-lib-modules\") pod \"kube-proxy-4mbsr\" (UID: \"109b8a2a-1223-4fda-81c2-767c3de02848\") " pod="kube-system/kube-proxy-4mbsr" Sep 6 00:28:38.290107 kubelet[2088]: I0906 00:28:38.290105 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-cgroup\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290296 kubelet[2088]: I0906 00:28:38.290125 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cni-path\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290296 kubelet[2088]: I0906 00:28:38.290141 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-etc-cni-netd\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290296 kubelet[2088]: I0906 00:28:38.290156 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-config-path\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290296 kubelet[2088]: I0906 00:28:38.290172 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6b7k\" (UniqueName: \"kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-kube-api-access-m6b7k\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290296 kubelet[2088]: I0906 00:28:38.290186 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-hostproc\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290296 kubelet[2088]: I0906 00:28:38.290200 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/889a0a7f-c309-48b8-8ada-69f6a7916d34-clustermesh-secrets\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290472 kubelet[2088]: I0906 00:28:38.290216 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-net\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290472 
kubelet[2088]: I0906 00:28:38.290230 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-kernel\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290472 kubelet[2088]: I0906 00:28:38.290244 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-run\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290472 kubelet[2088]: I0906 00:28:38.290257 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-bpf-maps\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290472 kubelet[2088]: I0906 00:28:38.290272 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-hubble-tls\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290730 kubelet[2088]: I0906 00:28:38.290713 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/109b8a2a-1223-4fda-81c2-767c3de02848-kube-proxy\") pod \"kube-proxy-4mbsr\" (UID: \"109b8a2a-1223-4fda-81c2-767c3de02848\") " pod="kube-system/kube-proxy-4mbsr" Sep 6 00:28:38.290803 kubelet[2088]: I0906 00:28:38.290783 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/109b8a2a-1223-4fda-81c2-767c3de02848-xtables-lock\") pod \"kube-proxy-4mbsr\" (UID: \"109b8a2a-1223-4fda-81c2-767c3de02848\") " pod="kube-system/kube-proxy-4mbsr" Sep 6 00:28:38.290833 kubelet[2088]: I0906 00:28:38.290804 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxql\" (UniqueName: \"kubernetes.io/projected/109b8a2a-1223-4fda-81c2-767c3de02848-kube-api-access-qhxql\") pod \"kube-proxy-4mbsr\" (UID: \"109b8a2a-1223-4fda-81c2-767c3de02848\") " pod="kube-system/kube-proxy-4mbsr" Sep 6 00:28:38.290833 kubelet[2088]: I0906 00:28:38.290824 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-lib-modules\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.290887 kubelet[2088]: I0906 00:28:38.290841 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-xtables-lock\") pod \"cilium-rdtzx\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " pod="kube-system/cilium-rdtzx" Sep 6 00:28:38.392171 kubelet[2088]: I0906 00:28:38.392073 2088 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:28:38.568953 env[1712]: time="2025-09-06T00:28:38.568905752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mbsr,Uid:109b8a2a-1223-4fda-81c2-767c3de02848,Namespace:kube-system,Attempt:0,}" Sep 6 00:28:38.577610 env[1712]: time="2025-09-06T00:28:38.577568959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdtzx,Uid:889a0a7f-c309-48b8-8ada-69f6a7916d34,Namespace:kube-system,Attempt:0,}" Sep 6 00:28:39.102441 env[1712]: time="2025-09-06T00:28:39.102396844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.103657 env[1712]: time="2025-09-06T00:28:39.103617755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.109547 env[1712]: time="2025-09-06T00:28:39.109489624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.110480 env[1712]: time="2025-09-06T00:28:39.110445147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.112407 env[1712]: time="2025-09-06T00:28:39.112370071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.113980 env[1712]: time="2025-09-06T00:28:39.113939783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.114918 env[1712]: time="2025-09-06T00:28:39.114883009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.115973 env[1712]: time="2025-09-06T00:28:39.115941424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:39.144932 env[1712]: time="2025-09-06T00:28:39.144671003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:28:39.144932 env[1712]: time="2025-09-06T00:28:39.144756529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:28:39.144932 env[1712]: time="2025-09-06T00:28:39.144772862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:28:39.145407 env[1712]: time="2025-09-06T00:28:39.145357140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f pid=2147 runtime=io.containerd.runc.v2 Sep 6 00:28:39.146466 env[1712]: time="2025-09-06T00:28:39.146402400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:28:39.146563 env[1712]: time="2025-09-06T00:28:39.146502152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:28:39.146563 env[1712]: time="2025-09-06T00:28:39.146548802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:28:39.146807 env[1712]: time="2025-09-06T00:28:39.146747086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/961a219bf861932d282186c637d5e238bbab383dfbbda836bf90a1ab2a81d2c7 pid=2149 runtime=io.containerd.runc.v2 Sep 6 00:28:39.165604 systemd[1]: Started cri-containerd-386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f.scope. Sep 6 00:28:39.192352 systemd[1]: Started cri-containerd-961a219bf861932d282186c637d5e238bbab383dfbbda836bf90a1ab2a81d2c7.scope. Sep 6 00:28:39.214171 env[1712]: time="2025-09-06T00:28:39.213964633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdtzx,Uid:889a0a7f-c309-48b8-8ada-69f6a7916d34,Namespace:kube-system,Attempt:0,} returns sandbox id \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\"" Sep 6 00:28:39.219300 env[1712]: time="2025-09-06T00:28:39.219256554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:28:39.237628 env[1712]: time="2025-09-06T00:28:39.237572932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4mbsr,Uid:109b8a2a-1223-4fda-81c2-767c3de02848,Namespace:kube-system,Attempt:0,} returns sandbox id \"961a219bf861932d282186c637d5e238bbab383dfbbda836bf90a1ab2a81d2c7\"" Sep 6 00:28:39.243444 kubelet[2088]: E0906 00:28:39.243285 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:39.402222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055120506.mount: Deactivated successfully. Sep 6 00:28:40.243652 kubelet[2088]: E0906 00:28:40.243614 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:41.244463 kubelet[2088]: E0906 00:28:41.244370 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:42.245172 kubelet[2088]: E0906 00:28:42.245063 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:43.246004 kubelet[2088]: E0906 00:28:43.245930 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:44.199815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963850377.mount: Deactivated successfully. 
Sep 6 00:28:44.246438 kubelet[2088]: E0906 00:28:44.246401 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:45.246965 kubelet[2088]: E0906 00:28:45.246885 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:45.627193 amazon-ssm-agent[1762]: 2025-09-06 00:28:45 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 6 00:28:46.247517 kubelet[2088]: E0906 00:28:46.247415 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:47.055473 env[1712]: time="2025-09-06T00:28:47.055415052Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:47.058512 env[1712]: time="2025-09-06T00:28:47.058463658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:47.060994 env[1712]: time="2025-09-06T00:28:47.060948448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:47.061530 env[1712]: time="2025-09-06T00:28:47.061491713Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 00:28:47.064201 env[1712]: time="2025-09-06T00:28:47.063794208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 6 00:28:47.064483 env[1712]: time="2025-09-06T00:28:47.064436142Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:28:47.079366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221348581.mount: Deactivated successfully. Sep 6 00:28:47.094004 env[1712]: time="2025-09-06T00:28:47.093932917Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\"" Sep 6 00:28:47.094654 env[1712]: time="2025-09-06T00:28:47.094576077Z" level=info msg="StartContainer for \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\"" Sep 6 00:28:47.132267 systemd[1]: Started cri-containerd-ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad.scope. Sep 6 00:28:47.171352 env[1712]: time="2025-09-06T00:28:47.169118955Z" level=info msg="StartContainer for \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\" returns successfully" Sep 6 00:28:47.179044 systemd[1]: cri-containerd-ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad.scope: Deactivated successfully. 
Sep 6 00:28:47.248084 kubelet[2088]: E0906 00:28:47.248035 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:47.333713 env[1712]: time="2025-09-06T00:28:47.333237665Z" level=info msg="shim disconnected" id=ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad Sep 6 00:28:47.333906 env[1712]: time="2025-09-06T00:28:47.333868193Z" level=warning msg="cleaning up after shim disconnected" id=ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad namespace=k8s.io Sep 6 00:28:47.333906 env[1712]: time="2025-09-06T00:28:47.333893611Z" level=info msg="cleaning up dead shim" Sep 6 00:28:47.342532 env[1712]: time="2025-09-06T00:28:47.342473151Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2267 runtime=io.containerd.runc.v2\n" Sep 6 00:28:47.629490 env[1712]: time="2025-09-06T00:28:47.629148327Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:28:47.678463 env[1712]: time="2025-09-06T00:28:47.678415992Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\"" Sep 6 00:28:47.679482 env[1712]: time="2025-09-06T00:28:47.679438609Z" level=info msg="StartContainer for \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\"" Sep 6 00:28:47.713802 systemd[1]: Started cri-containerd-86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b.scope. Sep 6 00:28:47.785928 env[1712]: time="2025-09-06T00:28:47.785873870Z" level=info msg="StartContainer for \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\" returns successfully" Sep 6 00:28:47.802623 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:28:47.803751 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:28:47.804625 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:28:47.808721 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:28:47.817027 systemd[1]: cri-containerd-86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b.scope: Deactivated successfully. Sep 6 00:28:47.826068 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:28:47.861471 env[1712]: time="2025-09-06T00:28:47.861418474Z" level=info msg="shim disconnected" id=86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b Sep 6 00:28:47.861811 env[1712]: time="2025-09-06T00:28:47.861784848Z" level=warning msg="cleaning up after shim disconnected" id=86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b namespace=k8s.io Sep 6 00:28:47.861948 env[1712]: time="2025-09-06T00:28:47.861932256Z" level=info msg="cleaning up dead shim" Sep 6 00:28:47.881025 env[1712]: time="2025-09-06T00:28:47.880912482Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2331 runtime=io.containerd.runc.v2\n" Sep 6 00:28:48.083760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad-rootfs.mount: Deactivated successfully. 
Sep 6 00:28:48.248376 kubelet[2088]: E0906 00:28:48.248240 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:48.283258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286719278.mount: Deactivated successfully. Sep 6 00:28:48.631990 env[1712]: time="2025-09-06T00:28:48.631845072Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:28:48.655755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212407499.mount: Deactivated successfully. Sep 6 00:28:48.675532 env[1712]: time="2025-09-06T00:28:48.675448915Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\"" Sep 6 00:28:48.676354 env[1712]: time="2025-09-06T00:28:48.676303048Z" level=info msg="StartContainer for \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\"" Sep 6 00:28:48.725885 systemd[1]: Started cri-containerd-2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936.scope. Sep 6 00:28:48.820019 systemd[1]: cri-containerd-2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936.scope: Deactivated successfully. Sep 6 00:28:48.825931 env[1712]: time="2025-09-06T00:28:48.825878643Z" level=info msg="StartContainer for \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\" returns successfully" Sep 6 00:28:48.935778 env[1712]: time="2025-09-06T00:28:48.935631366Z" level=info msg="shim disconnected" id=2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936 Sep 6 00:28:48.935778 env[1712]: time="2025-09-06T00:28:48.935694781Z" level=warning msg="cleaning up after shim disconnected" id=2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936 namespace=k8s.io Sep 6 00:28:48.935778 env[1712]: time="2025-09-06T00:28:48.935708305Z" level=info msg="cleaning up dead shim" Sep 6 00:28:48.952102 env[1712]: time="2025-09-06T00:28:48.952047487Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2391 runtime=io.containerd.runc.v2\n" Sep 6 00:28:49.077982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674653272.mount: Deactivated successfully. 
Sep 6 00:28:49.122177 env[1712]: time="2025-09-06T00:28:49.122121002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:49.128015 env[1712]: time="2025-09-06T00:28:49.127556684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:49.133504 env[1712]: time="2025-09-06T00:28:49.133459329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:49.143254 env[1712]: time="2025-09-06T00:28:49.143205736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:28:49.143862 env[1712]: time="2025-09-06T00:28:49.143819104Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 6 00:28:49.147523 env[1712]: time="2025-09-06T00:28:49.147480002Z" level=info msg="CreateContainer within sandbox \"961a219bf861932d282186c637d5e238bbab383dfbbda836bf90a1ab2a81d2c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:28:49.170418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996811.mount: Deactivated successfully. Sep 6 00:28:49.177944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2791501320.mount: Deactivated successfully. Sep 6 00:28:49.187704 env[1712]: time="2025-09-06T00:28:49.187574610Z" level=info msg="CreateContainer within sandbox \"961a219bf861932d282186c637d5e238bbab383dfbbda836bf90a1ab2a81d2c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae8ddcca64125f01d6cb6c9dbe1d03d52bc7e456206396ac215e089cb861a79b\"" Sep 6 00:28:49.188965 env[1712]: time="2025-09-06T00:28:49.188934077Z" level=info msg="StartContainer for \"ae8ddcca64125f01d6cb6c9dbe1d03d52bc7e456206396ac215e089cb861a79b\"" Sep 6 00:28:49.209988 systemd[1]: Started cri-containerd-ae8ddcca64125f01d6cb6c9dbe1d03d52bc7e456206396ac215e089cb861a79b.scope. 
Sep 6 00:28:49.251343 kubelet[2088]: E0906 00:28:49.248476 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:49.257812 env[1712]: time="2025-09-06T00:28:49.257756303Z" level=info msg="StartContainer for \"ae8ddcca64125f01d6cb6c9dbe1d03d52bc7e456206396ac215e089cb861a79b\" returns successfully" Sep 6 00:28:49.638864 env[1712]: time="2025-09-06T00:28:49.638589127Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:28:49.644228 kubelet[2088]: I0906 00:28:49.644130 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4mbsr" podStartSLOduration=2.737864018 podStartE2EDuration="12.644111273s" podCreationTimestamp="2025-09-06 00:28:37 +0000 UTC" firstStartedPulling="2025-09-06 00:28:39.238834013 +0000 UTC m=+2.593200544" lastFinishedPulling="2025-09-06 00:28:49.145081263 +0000 UTC m=+12.499447799" observedRunningTime="2025-09-06 00:28:49.643593158 +0000 UTC m=+12.997959716" watchObservedRunningTime="2025-09-06 00:28:49.644111273 +0000 UTC m=+12.998477857" Sep 6 00:28:49.668858 env[1712]: time="2025-09-06T00:28:49.668772514Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\"" Sep 6 00:28:49.669857 env[1712]: time="2025-09-06T00:28:49.669822264Z" level=info msg="StartContainer for \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\"" Sep 6 00:28:49.697031 systemd[1]: Started cri-containerd-f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050.scope. Sep 6 00:28:49.734543 systemd[1]: cri-containerd-f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050.scope: Deactivated successfully. Sep 6 00:28:49.736184 env[1712]: time="2025-09-06T00:28:49.736145152Z" level=info msg="StartContainer for \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\" returns successfully" Sep 6 00:28:49.809978 env[1712]: time="2025-09-06T00:28:49.809924028Z" level=info msg="shim disconnected" id=f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050 Sep 6 00:28:49.810241 env[1712]: time="2025-09-06T00:28:49.809982316Z" level=warning msg="cleaning up after shim disconnected" id=f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050 namespace=k8s.io Sep 6 00:28:49.810241 env[1712]: time="2025-09-06T00:28:49.809997178Z" level=info msg="cleaning up dead shim" Sep 6 00:28:49.820134 env[1712]: time="2025-09-06T00:28:49.820086177Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:28:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2615 runtime=io.containerd.runc.v2\n" Sep 6 00:28:50.249128 kubelet[2088]: E0906 00:28:50.249084 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:50.643649 env[1712]: time="2025-09-06T00:28:50.643301706Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:28:50.670686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423628600.mount: Deactivated successfully. 
Sep 6 00:28:50.679649 env[1712]: time="2025-09-06T00:28:50.679497960Z" level=info msg="CreateContainer within sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\"" Sep 6 00:28:50.680158 env[1712]: time="2025-09-06T00:28:50.680131624Z" level=info msg="StartContainer for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\"" Sep 6 00:28:50.703542 systemd[1]: Started cri-containerd-cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064.scope. Sep 6 00:28:50.744264 env[1712]: time="2025-09-06T00:28:50.744199104Z" level=info msg="StartContainer for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" returns successfully" Sep 6 00:28:50.961917 kubelet[2088]: I0906 00:28:50.961816 2088 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:28:51.090186 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 6 00:28:51.217756 kernel: Initializing XFRM netlink socket Sep 6 00:28:51.249862 kubelet[2088]: E0906 00:28:51.249815 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:52.250975 kubelet[2088]: E0906 00:28:52.250937 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:52.861137 (udev-worker)[2477]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:28:52.863335 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:28:52.864784 (udev-worker)[2756]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:28:52.865221 systemd-networkd[1444]: cilium_host: Link UP Sep 6 00:28:52.865754 systemd-networkd[1444]: cilium_net: Link UP Sep 6 00:28:52.865759 systemd-networkd[1444]: cilium_net: Gained carrier Sep 6 00:28:52.866015 systemd-networkd[1444]: cilium_host: Gained carrier Sep 6 00:28:52.995338 systemd-networkd[1444]: cilium_vxlan: Link UP Sep 6 00:28:52.995348 systemd-networkd[1444]: cilium_vxlan: Gained carrier Sep 6 00:28:53.219342 kernel: NET: Registered PF_ALG protocol family Sep 6 00:28:53.251875 kubelet[2088]: E0906 00:28:53.251817 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:53.322266 systemd-networkd[1444]: cilium_net: Gained IPv6LL Sep 6 00:28:53.488722 systemd-networkd[1444]: cilium_host: Gained IPv6LL Sep 6 00:28:53.896506 systemd-networkd[1444]: lxc_health: Link UP Sep 6 00:28:53.897592 (udev-worker)[2777]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:28:53.907638 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:28:53.907270 systemd-networkd[1444]: lxc_health: Gained carrier Sep 6 00:28:54.253089 kubelet[2088]: E0906 00:28:54.252933 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:54.833544 systemd-networkd[1444]: cilium_vxlan: Gained IPv6LL Sep 6 00:28:54.900465 kubelet[2088]: I0906 00:28:54.900392 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rdtzx" podStartSLOduration=10.055754388 podStartE2EDuration="17.900369975s" podCreationTimestamp="2025-09-06 00:28:37 +0000 UTC" firstStartedPulling="2025-09-06 00:28:39.218225415 +0000 UTC m=+2.572591963" lastFinishedPulling="2025-09-06 00:28:47.062841006 +0000 UTC m=+10.417207550" observedRunningTime="2025-09-06 00:28:51.664009007 +0000 UTC m=+15.018375565" watchObservedRunningTime="2025-09-06 00:28:54.900369975 +0000 UTC m=+18.254736519" Sep 6 00:28:54.907006 systemd[1]: Created slice kubepods-besteffort-podb08e9c84_00f0_4c64_8e21_7f764bf51b33.slice. Sep 6 00:28:54.915583 kubelet[2088]: I0906 00:28:54.915540 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c8gt\" (UniqueName: \"kubernetes.io/projected/b08e9c84-00f0-4c64-8e21-7f764bf51b33-kube-api-access-4c8gt\") pod \"nginx-deployment-7fcdb87857-76sn9\" (UID: \"b08e9c84-00f0-4c64-8e21-7f764bf51b33\") " pod="default/nginx-deployment-7fcdb87857-76sn9" Sep 6 00:28:55.211108 env[1712]: time="2025-09-06T00:28:55.210427251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-76sn9,Uid:b08e9c84-00f0-4c64-8e21-7f764bf51b33,Namespace:default,Attempt:0,}" Sep 6 00:28:55.216557 systemd-networkd[1444]: lxc_health: Gained IPv6LL Sep 6 00:28:55.253690 kubelet[2088]: E0906 00:28:55.253634 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:55.282542 systemd-networkd[1444]: lxc54eaee45901c: Link UP Sep 6 00:28:55.288338 kernel: eth0: renamed from tmp40b62 Sep 6 00:28:55.294948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:28:55.296241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc54eaee45901c: link becomes ready Sep 6 00:28:55.295294 systemd-networkd[1444]: lxc54eaee45901c: Gained carrier Sep 6 00:28:56.254760 kubelet[2088]: E0906 00:28:56.254718 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:57.242472 kubelet[2088]: E0906 00:28:57.242428 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:57.256524 kubelet[2088]: E0906 00:28:57.256483 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:57.328515 systemd-networkd[1444]: lxc54eaee45901c: Gained IPv6LL Sep 6 00:28:58.257962 kubelet[2088]: E0906 00:28:58.257909 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:28:58.490838 amazon-ssm-agent[1762]: 2025-09-06 00:28:58 INFO [HealthCheck] HealthCheck reporting agent health. Sep 6 00:28:58.854847 env[1712]: time="2025-09-06T00:28:58.854682898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:28:58.855827 env[1712]: time="2025-09-06T00:28:58.855111976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:28:58.855827 env[1712]: time="2025-09-06T00:28:58.855173376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:28:58.856229 env[1712]: time="2025-09-06T00:28:58.856147934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40b62625f58bc767abdb703b0df58446b0dae8a13f202d03a0ee29b51e04615e pid=3130 runtime=io.containerd.runc.v2 Sep 6 00:28:58.881784 systemd[1]: Started cri-containerd-40b62625f58bc767abdb703b0df58446b0dae8a13f202d03a0ee29b51e04615e.scope. Sep 6 00:28:58.929889 env[1712]: time="2025-09-06T00:28:58.929831263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-76sn9,Uid:b08e9c84-00f0-4c64-8e21-7f764bf51b33,Namespace:default,Attempt:0,} returns sandbox id \"40b62625f58bc767abdb703b0df58446b0dae8a13f202d03a0ee29b51e04615e\"" Sep 6 00:28:58.931440 env[1712]: time="2025-09-06T00:28:58.931405408Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:28:59.259204 kubelet[2088]: E0906 00:28:59.259016 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:00.259377 kubelet[2088]: E0906 00:29:00.259206 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:01.260200 kubelet[2088]: E0906 00:29:01.260134 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:02.261583 kubelet[2088]: E0906 00:29:02.261513 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:02.878179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912000276.mount: Deactivated successfully. 
Sep 6 00:29:03.264278 kubelet[2088]: E0906 00:29:03.263913 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:04.264985 kubelet[2088]: E0906 00:29:04.264905 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:04.952023 env[1712]: time="2025-09-06T00:29:04.951967462Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:04.954701 env[1712]: time="2025-09-06T00:29:04.954658268Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:04.957341 env[1712]: time="2025-09-06T00:29:04.957278748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:04.959144 env[1712]: time="2025-09-06T00:29:04.959101699Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:04.959780 env[1712]: time="2025-09-06T00:29:04.959740918Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 6 00:29:04.962614 env[1712]: time="2025-09-06T00:29:04.962573143Z" level=info msg="CreateContainer within sandbox \"40b62625f58bc767abdb703b0df58446b0dae8a13f202d03a0ee29b51e04615e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 6 00:29:04.986516 env[1712]: time="2025-09-06T00:29:04.986436920Z" level=info msg="CreateContainer within sandbox \"40b62625f58bc767abdb703b0df58446b0dae8a13f202d03a0ee29b51e04615e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6ba5b02e2e0b627da6377a3aac900995ba57368e0072995af5f5eaa79a7846ec\"" Sep 6 00:29:04.987368 env[1712]: time="2025-09-06T00:29:04.987307048Z" level=info msg="StartContainer for \"6ba5b02e2e0b627da6377a3aac900995ba57368e0072995af5f5eaa79a7846ec\"" Sep 6 00:29:05.016637 systemd[1]: Started cri-containerd-6ba5b02e2e0b627da6377a3aac900995ba57368e0072995af5f5eaa79a7846ec.scope. 
Sep 6 00:29:05.054969 env[1712]: time="2025-09-06T00:29:05.054915056Z" level=info msg="StartContainer for \"6ba5b02e2e0b627da6377a3aac900995ba57368e0072995af5f5eaa79a7846ec\" returns successfully" Sep 6 00:29:05.265980 kubelet[2088]: E0906 00:29:05.265845 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:05.686918 kubelet[2088]: I0906 00:29:05.686858 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-76sn9" podStartSLOduration=5.656461507 podStartE2EDuration="11.686840944s" podCreationTimestamp="2025-09-06 00:28:54 +0000 UTC" firstStartedPulling="2025-09-06 00:28:58.931015093 +0000 UTC m=+22.285381625" lastFinishedPulling="2025-09-06 00:29:04.961394526 +0000 UTC m=+28.315761062" observedRunningTime="2025-09-06 00:29:05.686565335 +0000 UTC m=+29.040931891" watchObservedRunningTime="2025-09-06 00:29:05.686840944 +0000 UTC m=+29.041207498" Sep 6 00:29:06.079307 update_engine[1707]: I0906 00:29:06.079231 1707 update_attempter.cc:509] Updating boot flags... Sep 6 00:29:06.267101 kubelet[2088]: E0906 00:29:06.267055 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:07.267969 kubelet[2088]: E0906 00:29:07.267895 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:08.212720 systemd[1]: Created slice kubepods-besteffort-pod8998162e_2cf6_461a_b04c_463fe8a7acc2.slice. Sep 6 00:29:08.238988 kubelet[2088]: I0906 00:29:08.238934 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5kfl\" (UniqueName: \"kubernetes.io/projected/8998162e-2cf6-461a-b04c-463fe8a7acc2-kube-api-access-q5kfl\") pod \"nfs-server-provisioner-0\" (UID: \"8998162e-2cf6-461a-b04c-463fe8a7acc2\") " pod="default/nfs-server-provisioner-0" Sep 6 00:29:08.238988 kubelet[2088]: I0906 00:29:08.238978 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8998162e-2cf6-461a-b04c-463fe8a7acc2-data\") pod \"nfs-server-provisioner-0\" (UID: \"8998162e-2cf6-461a-b04c-463fe8a7acc2\") " pod="default/nfs-server-provisioner-0" Sep 6 00:29:08.268715 kubelet[2088]: E0906 00:29:08.268624 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:08.517915 env[1712]: time="2025-09-06T00:29:08.517588994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8998162e-2cf6-461a-b04c-463fe8a7acc2,Namespace:default,Attempt:0,}" Sep 6 00:29:08.588507 (udev-worker)[3237]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:29:08.589537 (udev-worker)[3225]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:29:08.595432 systemd-networkd[1444]: lxcb7bb4cac493b: Link UP Sep 6 00:29:08.606548 kernel: eth0: renamed from tmp6d211 Sep 6 00:29:08.613507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:29:08.613627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb7bb4cac493b: link becomes ready Sep 6 00:29:08.613559 systemd-networkd[1444]: lxcb7bb4cac493b: Gained carrier Sep 6 00:29:08.838115 env[1712]: time="2025-09-06T00:29:08.838020877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:08.838115 env[1712]: time="2025-09-06T00:29:08.838082785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:08.838383 env[1712]: time="2025-09-06T00:29:08.838097234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:08.838655 env[1712]: time="2025-09-06T00:29:08.838595890Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d211ed07bab93a6fa529dcff75a7cf24869217475e30bc9748a2672a39e2596 pid=3518 runtime=io.containerd.runc.v2 Sep 6 00:29:08.859428 systemd[1]: Started cri-containerd-6d211ed07bab93a6fa529dcff75a7cf24869217475e30bc9748a2672a39e2596.scope. Sep 6 00:29:08.913775 env[1712]: time="2025-09-06T00:29:08.913728954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8998162e-2cf6-461a-b04c-463fe8a7acc2,Namespace:default,Attempt:0,} returns sandbox id \"6d211ed07bab93a6fa529dcff75a7cf24869217475e30bc9748a2672a39e2596\"" Sep 6 00:29:08.916141 env[1712]: time="2025-09-06T00:29:08.916107775Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 6 00:29:09.270044 kubelet[2088]: E0906 00:29:09.269345 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:10.269586 kubelet[2088]: E0906 00:29:10.269516 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:10.448932 systemd-networkd[1444]: lxcb7bb4cac493b: Gained IPv6LL Sep 6 00:29:11.270566 kubelet[2088]: E0906 00:29:11.270524 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:11.430416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089674934.mount: Deactivated successfully. 
Sep 6 00:29:12.270809 kubelet[2088]: E0906 00:29:12.270763 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:13.271378 kubelet[2088]: E0906 00:29:13.271334 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:13.601384 env[1712]: time="2025-09-06T00:29:13.601307621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:13.605670 env[1712]: time="2025-09-06T00:29:13.605625240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:13.608709 env[1712]: time="2025-09-06T00:29:13.608666192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:13.612065 env[1712]: time="2025-09-06T00:29:13.612013619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:13.612372 env[1712]: time="2025-09-06T00:29:13.612337358Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Sep 6 00:29:13.615419 env[1712]: time="2025-09-06T00:29:13.615372497Z" level=info msg="CreateContainer within sandbox \"6d211ed07bab93a6fa529dcff75a7cf24869217475e30bc9748a2672a39e2596\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 6 00:29:13.630294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865330069.mount: Deactivated successfully. Sep 6 00:29:13.636078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717435026.mount: Deactivated successfully. Sep 6 00:29:13.647603 env[1712]: time="2025-09-06T00:29:13.647558501Z" level=info msg="CreateContainer within sandbox \"6d211ed07bab93a6fa529dcff75a7cf24869217475e30bc9748a2672a39e2596\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6399b6ffde893fa227e33b4b1d8774eca936cb536096543970cafdd6b0bdcb6e\"" Sep 6 00:29:13.648599 env[1712]: time="2025-09-06T00:29:13.648418913Z" level=info msg="StartContainer for \"6399b6ffde893fa227e33b4b1d8774eca936cb536096543970cafdd6b0bdcb6e\"" Sep 6 00:29:13.676144 systemd[1]: Started cri-containerd-6399b6ffde893fa227e33b4b1d8774eca936cb536096543970cafdd6b0bdcb6e.scope. 
Sep 6 00:29:13.742621 env[1712]: time="2025-09-06T00:29:13.742551162Z" level=info msg="StartContainer for \"6399b6ffde893fa227e33b4b1d8774eca936cb536096543970cafdd6b0bdcb6e\" returns successfully" Sep 6 00:29:14.272296 kubelet[2088]: E0906 00:29:14.272241 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:14.738186 kubelet[2088]: I0906 00:29:14.738114 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.039784894 podStartE2EDuration="6.738094959s" podCreationTimestamp="2025-09-06 00:29:08 +0000 UTC" firstStartedPulling="2025-09-06 00:29:08.915541772 +0000 UTC m=+32.269908317" lastFinishedPulling="2025-09-06 00:29:13.613851836 +0000 UTC m=+36.968218382" observedRunningTime="2025-09-06 00:29:14.73790413 +0000 UTC m=+38.092270698" watchObservedRunningTime="2025-09-06 00:29:14.738094959 +0000 UTC m=+38.092461515" Sep 6 00:29:15.273162 kubelet[2088]: E0906 00:29:15.273084 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:15.660046 amazon-ssm-agent[1762]: 2025-09-06 00:29:15 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 6 00:29:16.273594 kubelet[2088]: E0906 00:29:16.273535 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:17.243114 kubelet[2088]: E0906 00:29:17.242970 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:17.274053 kubelet[2088]: E0906 00:29:17.274012 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:18.274997 kubelet[2088]: E0906 00:29:18.274953 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:19.275361 kubelet[2088]: E0906 00:29:19.275286 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:20.276039 kubelet[2088]: E0906 00:29:20.275986 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:21.277032 kubelet[2088]: E0906 00:29:21.276982 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:22.277875 kubelet[2088]: E0906 00:29:22.277824 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:23.278817 kubelet[2088]: E0906 00:29:23.278760 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:23.383135 systemd[1]: Created slice kubepods-besteffort-pod73f7b8ae_df3c_482a_a5c5_3f2b9f67b1db.slice. 
Sep 6 00:29:23.445973 kubelet[2088]: I0906 00:29:23.445928 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6b312fa4-8f1a-41b1-ada0-e57aa0458094\" (UniqueName: \"kubernetes.io/nfs/73f7b8ae-df3c-482a-a5c5-3f2b9f67b1db-pvc-6b312fa4-8f1a-41b1-ada0-e57aa0458094\") pod \"test-pod-1\" (UID: \"73f7b8ae-df3c-482a-a5c5-3f2b9f67b1db\") " pod="default/test-pod-1" Sep 6 00:29:23.445973 kubelet[2088]: I0906 00:29:23.445975 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5dw\" (UniqueName: \"kubernetes.io/projected/73f7b8ae-df3c-482a-a5c5-3f2b9f67b1db-kube-api-access-fr5dw\") pod \"test-pod-1\" (UID: \"73f7b8ae-df3c-482a-a5c5-3f2b9f67b1db\") " pod="default/test-pod-1" Sep 6 00:29:23.617353 kernel: FS-Cache: Loaded Sep 6 00:29:23.672048 kernel: RPC: Registered named UNIX socket transport module. Sep 6 00:29:23.672185 kernel: RPC: Registered udp transport module. Sep 6 00:29:23.672221 kernel: RPC: Registered tcp transport module. Sep 6 00:29:23.674372 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 6 00:29:23.746363 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 6 00:29:23.931976 kernel: NFS: Registering the id_resolver key type Sep 6 00:29:23.932175 kernel: Key type id_resolver registered Sep 6 00:29:23.932217 kernel: Key type id_legacy registered Sep 6 00:29:24.007752 nfsidmap[3641]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 6 00:29:24.013450 nfsidmap[3642]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 6 00:29:24.279812 kubelet[2088]: E0906 00:29:24.279670 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:24.287515 env[1712]: time="2025-09-06T00:29:24.287458989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:73f7b8ae-df3c-482a-a5c5-3f2b9f67b1db,Namespace:default,Attempt:0,}" Sep 6 00:29:24.341780 (udev-worker)[3629]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:29:24.342720 (udev-worker)[3635]: Network interface NamePolicy= disabled on kernel command line. Sep 6 00:29:24.345988 systemd-networkd[1444]: lxc65d724e60ca3: Link UP Sep 6 00:29:24.351640 kernel: eth0: renamed from tmp15cd5 Sep 6 00:29:24.357877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:29:24.358028 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc65d724e60ca3: link becomes ready Sep 6 00:29:24.358990 systemd-networkd[1444]: lxc65d724e60ca3: Gained carrier Sep 6 00:29:24.565103 env[1712]: time="2025-09-06T00:29:24.564363166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:24.565103 env[1712]: time="2025-09-06T00:29:24.564405331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:24.565393 env[1712]: time="2025-09-06T00:29:24.564421511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:24.565393 env[1712]: time="2025-09-06T00:29:24.564596988Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15cd50daee33bfdf858a4a10f2080f38bd05d51795464fe8d5977d5570918fde pid=3668 runtime=io.containerd.runc.v2 Sep 6 00:29:24.595060 systemd[1]: run-containerd-runc-k8s.io-15cd50daee33bfdf858a4a10f2080f38bd05d51795464fe8d5977d5570918fde-runc.XSCOPc.mount: Deactivated successfully. Sep 6 00:29:24.598171 systemd[1]: Started cri-containerd-15cd50daee33bfdf858a4a10f2080f38bd05d51795464fe8d5977d5570918fde.scope. Sep 6 00:29:24.647505 env[1712]: time="2025-09-06T00:29:24.647459858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:73f7b8ae-df3c-482a-a5c5-3f2b9f67b1db,Namespace:default,Attempt:0,} returns sandbox id \"15cd50daee33bfdf858a4a10f2080f38bd05d51795464fe8d5977d5570918fde\"" Sep 6 00:29:24.649196 env[1712]: time="2025-09-06T00:29:24.649153159Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:29:24.967419 env[1712]: time="2025-09-06T00:29:24.967351206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:24.971745 env[1712]: time="2025-09-06T00:29:24.971693999Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:24.974813 env[1712]: time="2025-09-06T00:29:24.974766709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:24.977755 env[1712]: time="2025-09-06T00:29:24.977694949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:24.978449 env[1712]: time="2025-09-06T00:29:24.978411345Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 6 00:29:24.981548 env[1712]: time="2025-09-06T00:29:24.981508213Z" level=info msg="CreateContainer within sandbox \"15cd50daee33bfdf858a4a10f2080f38bd05d51795464fe8d5977d5570918fde\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 6 00:29:25.010491 env[1712]: time="2025-09-06T00:29:25.010421444Z" level=info msg="CreateContainer within sandbox \"15cd50daee33bfdf858a4a10f2080f38bd05d51795464fe8d5977d5570918fde\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"497833c6136c83744e14112b9b50455dc51977a2d6c26ec634b3163bc33ceebc\"" Sep 6 00:29:25.011142 env[1712]: time="2025-09-06T00:29:25.011107709Z" level=info msg="StartContainer for \"497833c6136c83744e14112b9b50455dc51977a2d6c26ec634b3163bc33ceebc\"" Sep 6 00:29:25.043921 systemd[1]: Started cri-containerd-497833c6136c83744e14112b9b50455dc51977a2d6c26ec634b3163bc33ceebc.scope. 
Sep 6 00:29:25.086886 env[1712]: time="2025-09-06T00:29:25.086839534Z" level=info msg="StartContainer for \"497833c6136c83744e14112b9b50455dc51977a2d6c26ec634b3163bc33ceebc\" returns successfully" Sep 6 00:29:25.280556 kubelet[2088]: E0906 00:29:25.280411 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:25.568252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount912085075.mount: Deactivated successfully. Sep 6 00:29:25.763192 kubelet[2088]: I0906 00:29:25.763136 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.431794632 podStartE2EDuration="17.763120094s" podCreationTimestamp="2025-09-06 00:29:08 +0000 UTC" firstStartedPulling="2025-09-06 00:29:24.648419756 +0000 UTC m=+48.002786289" lastFinishedPulling="2025-09-06 00:29:24.979745212 +0000 UTC m=+48.334111751" observedRunningTime="2025-09-06 00:29:25.762532629 +0000 UTC m=+49.116899184" watchObservedRunningTime="2025-09-06 00:29:25.763120094 +0000 UTC m=+49.117486646" Sep 6 00:29:26.192727 systemd-networkd[1444]: lxc65d724e60ca3: Gained IPv6LL Sep 6 00:29:26.281209 kubelet[2088]: E0906 00:29:26.281120 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:27.281859 kubelet[2088]: E0906 00:29:27.281801 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:28.282411 kubelet[2088]: E0906 00:29:28.282364 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:29.283198 kubelet[2088]: E0906 00:29:29.283147 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:30.283339 kubelet[2088]: E0906 00:29:30.283256 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:31.283936 kubelet[2088]: E0906 00:29:31.283893 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:32.284086 kubelet[2088]: E0906 00:29:32.284039 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:33.285056 kubelet[2088]: E0906 00:29:33.284975 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:34.285167 kubelet[2088]: E0906 00:29:34.285124 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:35.286236 kubelet[2088]: E0906 00:29:35.286164 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:35.836447 systemd[1]: run-containerd-runc-k8s.io-cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064-runc.AReyAV.mount: Deactivated successfully. 
Sep 6 00:29:36.120558 env[1712]: time="2025-09-06T00:29:36.120214674Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:29:36.159611 env[1712]: time="2025-09-06T00:29:36.159555721Z" level=info msg="StopContainer for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" with timeout 2 (s)" Sep 6 00:29:36.159878 env[1712]: time="2025-09-06T00:29:36.159829191Z" level=info msg="Stop container \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" with signal terminated" Sep 6 00:29:36.168307 systemd-networkd[1444]: lxc_health: Link DOWN Sep 6 00:29:36.168332 systemd-networkd[1444]: lxc_health: Lost carrier Sep 6 00:29:36.225832 systemd[1]: cri-containerd-cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064.scope: Deactivated successfully. Sep 6 00:29:36.226192 systemd[1]: cri-containerd-cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064.scope: Consumed 7.206s CPU time. Sep 6 00:29:36.251776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064-rootfs.mount: Deactivated successfully. Sep 6 00:29:36.286736 kubelet[2088]: E0906 00:29:36.286693 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:36.294848 env[1712]: time="2025-09-06T00:29:36.294796519Z" level=info msg="shim disconnected" id=cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064 Sep 6 00:29:36.294848 env[1712]: time="2025-09-06T00:29:36.294845154Z" level=warning msg="cleaning up after shim disconnected" id=cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064 namespace=k8s.io Sep 6 00:29:36.294848 env[1712]: time="2025-09-06T00:29:36.294855466Z" level=info msg="cleaning up dead shim" Sep 6 00:29:36.303856 env[1712]: time="2025-09-06T00:29:36.303797821Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799 runtime=io.containerd.runc.v2\n" Sep 6 00:29:36.307492 env[1712]: time="2025-09-06T00:29:36.307424643Z" level=info msg="StopContainer for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" returns successfully" Sep 6 00:29:36.308117 env[1712]: time="2025-09-06T00:29:36.308061438Z" level=info msg="StopPodSandbox for \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\"" Sep 6 00:29:36.308229 env[1712]: time="2025-09-06T00:29:36.308120245Z" level=info msg="Container to stop \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:29:36.308229 env[1712]: time="2025-09-06T00:29:36.308139950Z" level=info msg="Container to stop \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:29:36.308229 env[1712]: time="2025-09-06T00:29:36.308150392Z" level=info msg="Container to stop \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:29:36.308229 env[1712]: time="2025-09-06T00:29:36.308161487Z" level=info msg="Container to stop \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:29:36.308229 env[1712]: time="2025-09-06T00:29:36.308171587Z" level=info msg="Container to stop \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:29:36.310415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f-shm.mount: Deactivated successfully. Sep 6 00:29:36.319230 systemd[1]: cri-containerd-386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f.scope: Deactivated successfully. Sep 6 00:29:36.341306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f-rootfs.mount: Deactivated successfully. Sep 6 00:29:36.355349 env[1712]: time="2025-09-06T00:29:36.355261224Z" level=info msg="shim disconnected" id=386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f Sep 6 00:29:36.355349 env[1712]: time="2025-09-06T00:29:36.355308252Z" level=warning msg="cleaning up after shim disconnected" id=386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f namespace=k8s.io Sep 6 00:29:36.355349 env[1712]: time="2025-09-06T00:29:36.355336238Z" level=info msg="cleaning up dead shim" Sep 6 00:29:36.366343 env[1712]: time="2025-09-06T00:29:36.366277472Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n" Sep 6 00:29:36.367280 env[1712]: time="2025-09-06T00:29:36.367247020Z" level=info msg="TearDown network for sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" successfully" Sep 6 00:29:36.367410 env[1712]: time="2025-09-06T00:29:36.367357464Z" level=info msg="StopPodSandbox for \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" returns successfully" Sep 6 00:29:36.448048 kubelet[2088]: I0906 00:29:36.447145 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6b7k\" (UniqueName: \"kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-kube-api-access-m6b7k\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448048 kubelet[2088]: I0906 00:29:36.447198 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-config-path\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448048 kubelet[2088]: I0906 00:29:36.447216 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-bpf-maps\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448048 kubelet[2088]: I0906 00:29:36.447234 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-xtables-lock\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448048 kubelet[2088]: I0906 00:29:36.447250 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/889a0a7f-c309-48b8-8ada-69f6a7916d34-clustermesh-secrets\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448048 kubelet[2088]: I0906 00:29:36.447270 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-net\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448403 kubelet[2088]: I0906 00:29:36.447283 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-run\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448403 kubelet[2088]: I0906 00:29:36.447300 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-etc-cni-netd\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448403 kubelet[2088]: I0906 00:29:36.447347 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cni-path\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448403 kubelet[2088]: I0906 00:29:36.447361 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-lib-modules\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448403 kubelet[2088]: I0906 00:29:36.447376 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-hostproc\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448403 kubelet[2088]: I0906 00:29:36.447389 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-kernel\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448566 kubelet[2088]: I0906 00:29:36.447406 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-hubble-tls\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448566 kubelet[2088]: I0906 00:29:36.447422 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-cgroup\") pod \"889a0a7f-c309-48b8-8ada-69f6a7916d34\" (UID: \"889a0a7f-c309-48b8-8ada-69f6a7916d34\") " Sep 6 00:29:36.448566 kubelet[2088]: I0906 00:29:36.447469 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod 
"889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.449009 kubelet[2088]: I0906 00:29:36.448966 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.449106 kubelet[2088]: I0906 00:29:36.449025 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.449829 kubelet[2088]: I0906 00:29:36.449803 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cni-path" (OuterVolumeSpecName: "cni-path") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.450102 kubelet[2088]: I0906 00:29:36.450081 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:29:36.450159 kubelet[2088]: I0906 00:29:36.450128 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.450159 kubelet[2088]: I0906 00:29:36.450143 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-hostproc" (OuterVolumeSpecName: "hostproc") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.450219 kubelet[2088]: I0906 00:29:36.450159 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.451154 kubelet[2088]: I0906 00:29:36.451132 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.451286 kubelet[2088]: I0906 00:29:36.451269 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.451373 kubelet[2088]: I0906 00:29:36.451362 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:36.453938 kubelet[2088]: I0906 00:29:36.453904 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:29:36.454990 kubelet[2088]: I0906 00:29:36.454955 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-kube-api-access-m6b7k" (OuterVolumeSpecName: "kube-api-access-m6b7k") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "kube-api-access-m6b7k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:29:36.457387 kubelet[2088]: I0906 00:29:36.457338 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889a0a7f-c309-48b8-8ada-69f6a7916d34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "889a0a7f-c309-48b8-8ada-69f6a7916d34" (UID: "889a0a7f-c309-48b8-8ada-69f6a7916d34"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:29:36.547984 kubelet[2088]: I0906 00:29:36.547933 2088 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cni-path\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.547984 kubelet[2088]: I0906 00:29:36.547968 2088 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-lib-modules\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.547984 kubelet[2088]: I0906 00:29:36.547982 2088 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-hostproc\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.547984 kubelet[2088]: I0906 00:29:36.547993 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-kernel\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548008 2088 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-hubble-tls\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548019 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-cgroup\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548029 2088 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6b7k\" (UniqueName: \"kubernetes.io/projected/889a0a7f-c309-48b8-8ada-69f6a7916d34-kube-api-access-m6b7k\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548042 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-config-path\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548052 2088 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-bpf-maps\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548062 2088 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-xtables-lock\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548072 2088 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/889a0a7f-c309-48b8-8ada-69f6a7916d34-clustermesh-secrets\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548286 kubelet[2088]: I0906 00:29:36.548083 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-host-proc-sys-net\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548636 kubelet[2088]: I0906 00:29:36.548093 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-cilium-run\") 
on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.548636 kubelet[2088]: I0906 00:29:36.548104 2088 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/889a0a7f-c309-48b8-8ada-69f6a7916d34-etc-cni-netd\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:36.774999 kubelet[2088]: I0906 00:29:36.773814 2088 scope.go:117] "RemoveContainer" containerID="cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064" Sep 6 00:29:36.775122 env[1712]: time="2025-09-06T00:29:36.774847718Z" level=info msg="RemoveContainer for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\"" Sep 6 00:29:36.780461 systemd[1]: Removed slice kubepods-burstable-pod889a0a7f_c309_48b8_8ada_69f6a7916d34.slice. Sep 6 00:29:36.781393 env[1712]: time="2025-09-06T00:29:36.780549562Z" level=info msg="RemoveContainer for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" returns successfully" Sep 6 00:29:36.781474 kubelet[2088]: I0906 00:29:36.781307 2088 scope.go:117] "RemoveContainer" containerID="f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050" Sep 6 00:29:36.780595 systemd[1]: kubepods-burstable-pod889a0a7f_c309_48b8_8ada_69f6a7916d34.slice: Consumed 7.340s CPU time. Sep 6 00:29:36.783206 env[1712]: time="2025-09-06T00:29:36.782938244Z" level=info msg="RemoveContainer for \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\"" Sep 6 00:29:36.788483 env[1712]: time="2025-09-06T00:29:36.788415341Z" level=info msg="RemoveContainer for \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\" returns successfully" Sep 6 00:29:36.788634 kubelet[2088]: I0906 00:29:36.788621 2088 scope.go:117] "RemoveContainer" containerID="2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936" Sep 6 00:29:36.789867 env[1712]: time="2025-09-06T00:29:36.789827342Z" level=info msg="RemoveContainer for \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\"" Sep 6 00:29:36.794956 env[1712]: time="2025-09-06T00:29:36.794894998Z" level=info msg="RemoveContainer for \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\" returns successfully" Sep 6 00:29:36.795335 kubelet[2088]: I0906 00:29:36.795194 2088 scope.go:117] "RemoveContainer" containerID="86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b" Sep 6 00:29:36.797469 env[1712]: time="2025-09-06T00:29:36.797298923Z" level=info msg="RemoveContainer for \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\"" Sep 6 00:29:36.802653 env[1712]: time="2025-09-06T00:29:36.802598590Z" level=info msg="RemoveContainer for \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\" returns successfully" Sep 6 00:29:36.803013 kubelet[2088]: I0906 00:29:36.802982 2088 scope.go:117] "RemoveContainer" containerID="ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad" Sep 6 00:29:36.804170 env[1712]: time="2025-09-06T00:29:36.804137534Z" level=info msg="RemoveContainer for \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\"" Sep 6 00:29:36.809284 env[1712]: time="2025-09-06T00:29:36.809240282Z" level=info msg="RemoveContainer for \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\" returns successfully" Sep 6 00:29:36.809519 kubelet[2088]: I0906 00:29:36.809497 2088 scope.go:117] "RemoveContainer" containerID="cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064" Sep 6 00:29:36.809830 env[1712]: time="2025-09-06T00:29:36.809764787Z" 
level=error msg="ContainerStatus for \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\": not found" Sep 6 00:29:36.809949 kubelet[2088]: E0906 00:29:36.809925 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\": not found" containerID="cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064" Sep 6 00:29:36.810056 kubelet[2088]: I0906 00:29:36.809956 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064"} err="failed to get container status \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbe7e204ab4185914e9929e9031af72273836eef7f1ac081a0d64eed814ef064\": not found" Sep 6 00:29:36.810056 kubelet[2088]: I0906 00:29:36.810024 2088 scope.go:117] "RemoveContainer" containerID="f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050" Sep 6 00:29:36.810210 env[1712]: time="2025-09-06T00:29:36.810163627Z" level=error msg="ContainerStatus for \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\": not found" Sep 6 00:29:36.810457 kubelet[2088]: E0906 00:29:36.810306 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\": not found" containerID="f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050" Sep 6 00:29:36.810457 kubelet[2088]: I0906 00:29:36.810342 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050"} err="failed to get container status \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\": rpc error: code = NotFound desc = an error occurred when try to find container \"f885329941211dec762c3dd97db9ac2577682d6df835be647366999daf2f8050\": not found" Sep 6 00:29:36.810457 kubelet[2088]: I0906 00:29:36.810361 2088 scope.go:117] "RemoveContainer" containerID="2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936" Sep 6 00:29:36.810802 env[1712]: time="2025-09-06T00:29:36.810738282Z" level=error msg="ContainerStatus for \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\": not found" Sep 6 00:29:36.811038 kubelet[2088]: E0906 00:29:36.810966 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\": not found" containerID="2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936" Sep 6 00:29:36.811038 kubelet[2088]: I0906 00:29:36.810995 2088 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936"} err="failed to get container status \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f3e852440a5f31fd0169ea6c06f5c2c3daf4393d24a33e770649813df4da936\": not found" Sep 6 00:29:36.811038 kubelet[2088]: I0906 00:29:36.811011 2088 scope.go:117] "RemoveContainer" containerID="86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b" Sep 6 00:29:36.811334 env[1712]: time="2025-09-06T00:29:36.811270118Z" level=error msg="ContainerStatus for \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\": not found" Sep 6 00:29:36.811517 kubelet[2088]: E0906 00:29:36.811473 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\": not found" containerID="86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b" Sep 6 00:29:36.811517 kubelet[2088]: I0906 00:29:36.811501 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b"} err="failed to get container status \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\": rpc error: code = NotFound desc = an error occurred when try to find container \"86c2c012fcc89de326017989b28adacc69b5666aaa4923a394fe2d3b8579245b\": not found" Sep 6 00:29:36.811517 kubelet[2088]: I0906 00:29:36.811520 2088 scope.go:117] "RemoveContainer" containerID="ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad" Sep 6 00:29:36.811727 env[1712]: time="2025-09-06T00:29:36.811667754Z" level=error msg="ContainerStatus for \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\": not found" Sep 6 00:29:36.811823 kubelet[2088]: E0906 00:29:36.811804 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\": not found" containerID="ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad" Sep 6 00:29:36.811869 kubelet[2088]: I0906 00:29:36.811826 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad"} err="failed to get container status \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec648126fc21452493c2c1e205680fe5f30335f8a5f51857659873899e36ecad\": not found" Sep 6 00:29:36.831480 systemd[1]: var-lib-kubelet-pods-889a0a7f\x2dc309\x2d48b8\x2d8ada\x2d69f6a7916d34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6b7k.mount: Deactivated successfully. Sep 6 00:29:36.831588 systemd[1]: var-lib-kubelet-pods-889a0a7f\x2dc309\x2d48b8\x2d8ada\x2d69f6a7916d34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:29:36.831651 systemd[1]: var-lib-kubelet-pods-889a0a7f\x2dc309\x2d48b8\x2d8ada\x2d69f6a7916d34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:29:37.242654 kubelet[2088]: E0906 00:29:37.242599 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:37.283876 env[1712]: time="2025-09-06T00:29:37.283835364Z" level=info msg="StopPodSandbox for \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\"" Sep 6 00:29:37.284390 env[1712]: time="2025-09-06T00:29:37.284301054Z" level=info msg="TearDown network for sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" successfully" Sep 6 00:29:37.284390 env[1712]: time="2025-09-06T00:29:37.284369822Z" level=info msg="StopPodSandbox for \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" returns successfully" Sep 6 00:29:37.284973 env[1712]: time="2025-09-06T00:29:37.284734058Z" level=info msg="RemovePodSandbox for \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\"" Sep 6 00:29:37.284973 env[1712]: time="2025-09-06T00:29:37.284949491Z" level=info msg="Forcibly stopping sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\"" Sep 6 00:29:37.284973 env[1712]: time="2025-09-06T00:29:37.285021884Z" level=info msg="TearDown network for sandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" successfully" Sep 6 00:29:37.286916 kubelet[2088]: E0906 00:29:37.286879 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:37.291171 env[1712]: time="2025-09-06T00:29:37.291102371Z" level=info msg="RemovePodSandbox \"386e7f55d109367f623d4ab4bfcc0dbad1e69361bad4dad1844497269947770f\" returns successfully" Sep 6 00:29:37.399735 kubelet[2088]: E0906 00:29:37.399694 2088 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:29:37.535150 kubelet[2088]: I0906 00:29:37.535029 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889a0a7f-c309-48b8-8ada-69f6a7916d34" path="/var/lib/kubelet/pods/889a0a7f-c309-48b8-8ada-69f6a7916d34/volumes" Sep 6 00:29:38.287617 kubelet[2088]: E0906 00:29:38.287564 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:38.595715 kubelet[2088]: I0906 00:29:38.595669 2088 setters.go:602] "Node became not ready" node="172.31.18.181" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:29:38Z","lastTransitionTime":"2025-09-06T00:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:29:38.772335 kubelet[2088]: I0906 00:29:38.772280 2088 memory_manager.go:355] "RemoveStaleState removing state" podUID="889a0a7f-c309-48b8-8ada-69f6a7916d34" containerName="cilium-agent" Sep 6 00:29:38.778668 systemd[1]: Created slice kubepods-besteffort-pod11373180_5c7d_43e3_b8ba_7ab92609f199.slice. Sep 6 00:29:38.787343 systemd[1]: Created slice kubepods-burstable-pod6d33f1e8_febd_4d4d_a069_1f364282327b.slice. 
Sep 6 00:29:38.861963 kubelet[2088]: I0906 00:29:38.861840 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-etc-cni-netd\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.861963 kubelet[2088]: I0906 00:29:38.861884 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-xtables-lock\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.861963 kubelet[2088]: I0906 00:29:38.861904 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-config-path\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.861963 kubelet[2088]: I0906 00:29:38.861920 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z598\" (UniqueName: \"kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-kube-api-access-2z598\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.861963 kubelet[2088]: I0906 00:29:38.861935 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-hostproc\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862346 kubelet[2088]: I0906 00:29:38.862303 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-cgroup\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862523 kubelet[2088]: I0906 00:29:38.862479 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-lib-modules\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862523 kubelet[2088]: I0906 00:29:38.862514 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-run\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862684 kubelet[2088]: I0906 00:29:38.862536 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cni-path\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862684 kubelet[2088]: I0906 00:29:38.862556 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-clustermesh-secrets\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862684 kubelet[2088]: I0906 00:29:38.862571 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-ipsec-secrets\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862684 kubelet[2088]: I0906 00:29:38.862587 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-kernel\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862684 kubelet[2088]: I0906 00:29:38.862601 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-hubble-tls\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862821 kubelet[2088]: I0906 00:29:38.862625 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11373180-5c7d-43e3-b8ba-7ab92609f199-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jv6h4\" (UID: \"11373180-5c7d-43e3-b8ba-7ab92609f199\") " pod="kube-system/cilium-operator-6c4d7847fc-jv6h4" Sep 6 00:29:38.862821 kubelet[2088]: I0906 00:29:38.862642 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfwq\" (UniqueName: \"kubernetes.io/projected/11373180-5c7d-43e3-b8ba-7ab92609f199-kube-api-access-mpfwq\") pod \"cilium-operator-6c4d7847fc-jv6h4\" (UID: \"11373180-5c7d-43e3-b8ba-7ab92609f199\") " pod="kube-system/cilium-operator-6c4d7847fc-jv6h4" Sep 6 00:29:38.862821 kubelet[2088]: I0906 00:29:38.862659 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-bpf-maps\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:38.862821 kubelet[2088]: I0906 00:29:38.862673 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-net\") pod \"cilium-qjmqj\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " pod="kube-system/cilium-qjmqj" Sep 6 00:29:39.085072 env[1712]: time="2025-09-06T00:29:39.085023320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jv6h4,Uid:11373180-5c7d-43e3-b8ba-7ab92609f199,Namespace:kube-system,Attempt:0,}" Sep 6 00:29:39.095376 env[1712]: time="2025-09-06T00:29:39.095300723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjmqj,Uid:6d33f1e8-febd-4d4d-a069-1f364282327b,Namespace:kube-system,Attempt:0,}" Sep 6 00:29:39.117555 env[1712]: time="2025-09-06T00:29:39.110393900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:39.117555 env[1712]: time="2025-09-06T00:29:39.110431556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:39.117555 env[1712]: time="2025-09-06T00:29:39.110442322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:39.117555 env[1712]: time="2025-09-06T00:29:39.110654544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea0ab41e41d7787eedd70d2236d72589f6acc49d42becf0d305520d92ed93060 pid=3859 runtime=io.containerd.runc.v2 Sep 6 00:29:39.136018 systemd[1]: Started cri-containerd-ea0ab41e41d7787eedd70d2236d72589f6acc49d42becf0d305520d92ed93060.scope. Sep 6 00:29:39.145335 env[1712]: time="2025-09-06T00:29:39.144514353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:39.145335 env[1712]: time="2025-09-06T00:29:39.144636954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:39.145335 env[1712]: time="2025-09-06T00:29:39.144676389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:39.145335 env[1712]: time="2025-09-06T00:29:39.144958934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e pid=3884 runtime=io.containerd.runc.v2 Sep 6 00:29:39.180398 systemd[1]: Started cri-containerd-7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e.scope. 
Sep 6 00:29:39.222405 env[1712]: time="2025-09-06T00:29:39.222349697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jv6h4,Uid:11373180-5c7d-43e3-b8ba-7ab92609f199,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea0ab41e41d7787eedd70d2236d72589f6acc49d42becf0d305520d92ed93060\"" Sep 6 00:29:39.225301 env[1712]: time="2025-09-06T00:29:39.225259431Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:29:39.228530 env[1712]: time="2025-09-06T00:29:39.228490710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjmqj,Uid:6d33f1e8-febd-4d4d-a069-1f364282327b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\"" Sep 6 00:29:39.231618 env[1712]: time="2025-09-06T00:29:39.231573057Z" level=info msg="CreateContainer within sandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:29:39.253141 env[1712]: time="2025-09-06T00:29:39.253069308Z" level=info msg="CreateContainer within sandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\"" Sep 6 00:29:39.253792 env[1712]: time="2025-09-06T00:29:39.253742596Z" level=info msg="StartContainer for \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\"" Sep 6 00:29:39.273202 systemd[1]: Started cri-containerd-466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7.scope. Sep 6 00:29:39.287879 kubelet[2088]: E0906 00:29:39.287790 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:39.289117 systemd[1]: cri-containerd-466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7.scope: Deactivated successfully. 
Sep 6 00:29:39.311404 env[1712]: time="2025-09-06T00:29:39.311298402Z" level=info msg="shim disconnected" id=466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7 Sep 6 00:29:39.311404 env[1712]: time="2025-09-06T00:29:39.311400569Z" level=warning msg="cleaning up after shim disconnected" id=466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7 namespace=k8s.io Sep 6 00:29:39.311404 env[1712]: time="2025-09-06T00:29:39.311410073Z" level=info msg="cleaning up dead shim" Sep 6 00:29:39.319735 env[1712]: time="2025-09-06T00:29:39.319666678Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3964 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:29:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:29:39.320115 env[1712]: time="2025-09-06T00:29:39.319988513Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" Sep 6 00:29:39.320426 env[1712]: time="2025-09-06T00:29:39.320376139Z" level=error msg="Failed to pipe stdout of container \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\"" error="reading from a closed fifo" Sep 6 00:29:39.320633 env[1712]: time="2025-09-06T00:29:39.320552339Z" level=error msg="Failed to pipe stderr of container \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\"" error="reading from a closed fifo" Sep 6 00:29:39.324561 env[1712]: time="2025-09-06T00:29:39.324478465Z" level=error msg="StartContainer for \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:29:39.324992 kubelet[2088]: E0906 00:29:39.324948 2088 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7" Sep 6 00:29:39.327639 kubelet[2088]: E0906 00:29:39.327440 2088 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 6 00:29:39.327639 kubelet[2088]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:29:39.327639 kubelet[2088]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:29:39.327639 kubelet[2088]: rm /hostbin/cilium-mount Sep 6 00:29:39.327882 kubelet[2088]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z598,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-qjmqj_kube-system(6d33f1e8-febd-4d4d-a069-1f364282327b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:29:39.327882 kubelet[2088]: > logger="UnhandledError" Sep 6 00:29:39.328660 kubelet[2088]: E0906 00:29:39.328611 2088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qjmqj" podUID="6d33f1e8-febd-4d4d-a069-1f364282327b" Sep 6 00:29:39.788166 env[1712]: time="2025-09-06T00:29:39.788113986Z" level=info msg="StopPodSandbox for \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\"" Sep 6 00:29:39.788380 env[1712]: time="2025-09-06T00:29:39.788177454Z" level=info msg="Container to stop \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:29:39.795568 systemd[1]: cri-containerd-7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e.scope: Deactivated successfully. 
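The "write /proc/self/attr/keycreate: invalid argument" failure above is runc being unable to apply the SELinux label requested by the pod spec (SELinuxOptions Type:spc_t, Level:s0): the kernel rejects the label write with EINVAL when SELinux is disabled or when that type is not known to the loaded policy, so the mount-cgroup init container never starts and the cilium-qjmqj sandbox is stopped and torn down in the entries that follow. A minimal sketch, assuming a hypothetical full label string built from the Type/Level shown in the spec (this is an illustration, not the runc code path), reproduces the same write:

package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Hypothetical full label derived from Type:spc_t / Level:s0 in the pod spec above.
	label := "system_u:system_r:spc_t:s0"

	// runc sets process SELinux attributes through /proc/self/attr/*; this is the
	// same write that surfaces in the log as "invalid argument".
	err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0)
	switch {
	case err == nil:
		fmt.Println("keycreate label accepted:", label)
	case errors.Is(err, syscall.EINVAL):
		fmt.Println("EINVAL: label rejected (SELinux disabled, or spc_t unknown to the loaded policy)")
	default:
		fmt.Println("write failed:", err)
	}
}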
Sep 6 00:29:39.829993 env[1712]: time="2025-09-06T00:29:39.829935919Z" level=info msg="shim disconnected" id=7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e Sep 6 00:29:39.829993 env[1712]: time="2025-09-06T00:29:39.829985902Z" level=warning msg="cleaning up after shim disconnected" id=7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e namespace=k8s.io Sep 6 00:29:39.829993 env[1712]: time="2025-09-06T00:29:39.829996223Z" level=info msg="cleaning up dead shim" Sep 6 00:29:39.841674 env[1712]: time="2025-09-06T00:29:39.841626058Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3998 runtime=io.containerd.runc.v2\n" Sep 6 00:29:39.842024 env[1712]: time="2025-09-06T00:29:39.841993015Z" level=info msg="TearDown network for sandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" successfully" Sep 6 00:29:39.842121 env[1712]: time="2025-09-06T00:29:39.842020023Z" level=info msg="StopPodSandbox for \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" returns successfully" Sep 6 00:29:39.869714 kubelet[2088]: I0906 00:29:39.869678 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-config-path\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.869922 kubelet[2088]: I0906 00:29:39.869909 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-xtables-lock\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.869990 kubelet[2088]: I0906 00:29:39.869982 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-lib-modules\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870052 kubelet[2088]: I0906 00:29:39.870043 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-clustermesh-secrets\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870116 kubelet[2088]: I0906 00:29:39.870108 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z598\" (UniqueName: \"kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-kube-api-access-2z598\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870168 kubelet[2088]: I0906 00:29:39.870160 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-cgroup\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870221 kubelet[2088]: I0906 00:29:39.870213 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cni-path\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: 
\"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870276 kubelet[2088]: I0906 00:29:39.870268 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-ipsec-secrets\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870389 kubelet[2088]: I0906 00:29:39.870379 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-etc-cni-netd\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870456 kubelet[2088]: I0906 00:29:39.870446 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-net\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870514 kubelet[2088]: I0906 00:29:39.870504 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-kernel\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870574 kubelet[2088]: I0906 00:29:39.870566 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-bpf-maps\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870646 kubelet[2088]: I0906 00:29:39.870623 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-hostproc\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870700 kubelet[2088]: I0906 00:29:39.870692 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-run\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.870762 kubelet[2088]: I0906 00:29:39.870754 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-hubble-tls\") pod \"6d33f1e8-febd-4d4d-a069-1f364282327b\" (UID: \"6d33f1e8-febd-4d4d-a069-1f364282327b\") " Sep 6 00:29:39.871071 kubelet[2088]: I0906 00:29:39.871040 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.873611 kubelet[2088]: I0906 00:29:39.873571 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:29:39.873732 kubelet[2088]: I0906 00:29:39.873639 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.873732 kubelet[2088]: I0906 00:29:39.873655 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.874735 kubelet[2088]: I0906 00:29:39.874707 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.874849 kubelet[2088]: I0906 00:29:39.874838 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.874918 kubelet[2088]: I0906 00:29:39.874907 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.874973 kubelet[2088]: I0906 00:29:39.874964 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.875030 kubelet[2088]: I0906 00:29:39.875021 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.875081 kubelet[2088]: I0906 00:29:39.875072 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.875186 kubelet[2088]: I0906 00:29:39.875174 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:29:39.875253 kubelet[2088]: I0906 00:29:39.875245 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:29:39.878030 kubelet[2088]: I0906 00:29:39.877991 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:29:39.878142 kubelet[2088]: I0906 00:29:39.878109 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:29:39.880828 kubelet[2088]: I0906 00:29:39.880762 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-kube-api-access-2z598" (OuterVolumeSpecName: "kube-api-access-2z598") pod "6d33f1e8-febd-4d4d-a069-1f364282327b" (UID: "6d33f1e8-febd-4d4d-a069-1f364282327b"). InnerVolumeSpecName "kube-api-access-2z598". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:29:39.971439 systemd[1]: var-lib-kubelet-pods-6d33f1e8\x2dfebd\x2d4d4d\x2da069\x2d1f364282327b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2z598.mount: Deactivated successfully. Sep 6 00:29:39.971541 systemd[1]: var-lib-kubelet-pods-6d33f1e8\x2dfebd\x2d4d4d\x2da069\x2d1f364282327b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:29:39.971602 systemd[1]: var-lib-kubelet-pods-6d33f1e8\x2dfebd\x2d4d4d\x2da069\x2d1f364282327b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973565 2088 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-hostproc\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973588 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-run\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973597 2088 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-hubble-tls\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973606 2088 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-clustermesh-secrets\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973622 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-config-path\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973630 2088 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-xtables-lock\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973637 kubelet[2088]: I0906 00:29:39.973638 2088 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-lib-modules\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.971659 systemd[1]: var-lib-kubelet-pods-6d33f1e8\x2dfebd\x2d4d4d\x2da069\x2d1f364282327b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973645 2088 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2z598\" (UniqueName: \"kubernetes.io/projected/6d33f1e8-febd-4d4d-a069-1f364282327b-kube-api-access-2z598\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973653 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-cgroup\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973660 2088 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-cni-path\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973667 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6d33f1e8-febd-4d4d-a069-1f364282327b-cilium-ipsec-secrets\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973675 2088 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-bpf-maps\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973682 2088 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-etc-cni-netd\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973690 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-net\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:39.973878 kubelet[2088]: I0906 00:29:39.973699 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d33f1e8-febd-4d4d-a069-1f364282327b-host-proc-sys-kernel\") on node \"172.31.18.181\" DevicePath \"\"" Sep 6 00:29:40.159517 env[1712]: time="2025-09-06T00:29:40.159434985Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T002940Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=fae391a6703d5201a213c55c51c882a09429bbb69cf599038893c6de3487876c®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757119480~hmac=a9a7c130eb7fddb690a49085cb0cc8a666de4be94f459bf91f1209e8350a3a4d\": dial tcp: lookup cdn01.quay.io: no such host" Sep 6 00:29:40.159987 kubelet[2088]: E0906 00:29:40.159779 2088 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: 
httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T002940Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=fae391a6703d5201a213c55c51c882a09429bbb69cf599038893c6de3487876c®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757119480~hmac=a9a7c130eb7fddb690a49085cb0cc8a666de4be94f459bf91f1209e8350a3a4d\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 6 00:29:40.159987 kubelet[2088]: E0906 00:29:40.159830 2088 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T002940Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=fae391a6703d5201a213c55c51c882a09429bbb69cf599038893c6de3487876c®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757119480~hmac=a9a7c130eb7fddb690a49085cb0cc8a666de4be94f459bf91f1209e8350a3a4d\": dial tcp: lookup cdn01.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Sep 6 00:29:40.160218 kubelet[2088]: E0906 00:29:40.159960 2088 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map --debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpfwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-6c4d7847fc-jv6h4_kube-system(11373180-5c7d-43e3-b8ba-7ab92609f199): ErrImagePull: failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T002940Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=fae391a6703d5201a213c55c51c882a09429bbb69cf599038893c6de3487876c®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757119480~hmac=a9a7c130eb7fddb690a49085cb0cc8a666de4be94f459bf91f1209e8350a3a4d\": dial tcp: lookup cdn01.quay.io: no such host" logger="UnhandledError" Sep 6 00:29:40.161378 kubelet[2088]: E0906 00:29:40.161330 2088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T002940Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=fae391a6703d5201a213c55c51c882a09429bbb69cf599038893c6de3487876c®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757119480~hmac=a9a7c130eb7fddb690a49085cb0cc8a666de4be94f459bf91f1209e8350a3a4d\\\": dial tcp: lookup cdn01.quay.io: no such host\"" pod="kube-system/cilium-operator-6c4d7847fc-jv6h4" podUID="11373180-5c7d-43e3-b8ba-7ab92609f199" Sep 6 00:29:40.288660 kubelet[2088]: E0906 00:29:40.288574 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:40.801051 kubelet[2088]: E0906 00:29:40.800972 2088 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": ErrImagePull: failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\\\"https://cdn01.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTCKFFWO5C%2F20250906%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250906T002940Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=fae391a6703d5201a213c55c51c882a09429bbb69cf599038893c6de3487876c®ion=us-east-1&namespace=cilium&repo_name=operator-generic&akamai_signature=exp=1757119480~hmac=a9a7c130eb7fddb690a49085cb0cc8a666de4be94f459bf91f1209e8350a3a4d\\\": dial tcp: lookup cdn01.quay.io: no such host\"" pod="kube-system/cilium-operator-6c4d7847fc-jv6h4" podUID="11373180-5c7d-43e3-b8ba-7ab92609f199" Sep 6 00:29:40.801520 kubelet[2088]: I0906 00:29:40.801496 2088 scope.go:117] "RemoveContainer" containerID="466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7" Sep 6 00:29:40.803406 env[1712]: time="2025-09-06T00:29:40.803306068Z" level=info msg="RemoveContainer for \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\"" Sep 6 00:29:40.804288 systemd[1]: Removed slice kubepods-burstable-pod6d33f1e8_febd_4d4d_a069_1f364282327b.slice. Sep 6 00:29:40.818562 env[1712]: time="2025-09-06T00:29:40.818516072Z" level=info msg="RemoveContainer for \"466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7\" returns successfully" Sep 6 00:29:40.884545 kubelet[2088]: I0906 00:29:40.884507 2088 memory_manager.go:355] "RemoveStaleState removing state" podUID="6d33f1e8-febd-4d4d-a069-1f364282327b" containerName="mount-cgroup" Sep 6 00:29:40.890638 systemd[1]: Created slice kubepods-burstable-pod46f0b164_7bfa_4087_8e19_0c9375e9344b.slice. Sep 6 00:29:40.982362 kubelet[2088]: I0906 00:29:40.982231 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46f0b164-7bfa-4087-8e19-0c9375e9344b-hubble-tls\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982362 kubelet[2088]: I0906 00:29:40.982280 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-hostproc\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982362 kubelet[2088]: I0906 00:29:40.982298 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-cni-path\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982362 kubelet[2088]: I0906 00:29:40.982330 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-etc-cni-netd\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982362 kubelet[2088]: I0906 00:29:40.982350 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-host-proc-sys-kernel\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982362 kubelet[2088]: 
I0906 00:29:40.982369 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-cilium-cgroup\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982385 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-lib-modules\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982400 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vsrv\" (UniqueName: \"kubernetes.io/projected/46f0b164-7bfa-4087-8e19-0c9375e9344b-kube-api-access-9vsrv\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982419 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-cilium-run\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982432 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-bpf-maps\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982450 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-xtables-lock\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982468 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46f0b164-7bfa-4087-8e19-0c9375e9344b-clustermesh-secrets\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982484 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46f0b164-7bfa-4087-8e19-0c9375e9344b-cilium-config-path\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982502 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/46f0b164-7bfa-4087-8e19-0c9375e9344b-cilium-ipsec-secrets\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:40.982668 kubelet[2088]: I0906 00:29:40.982517 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/46f0b164-7bfa-4087-8e19-0c9375e9344b-host-proc-sys-net\") pod \"cilium-p9w4w\" (UID: \"46f0b164-7bfa-4087-8e19-0c9375e9344b\") " pod="kube-system/cilium-p9w4w" Sep 6 00:29:41.197684 env[1712]: time="2025-09-06T00:29:41.197637233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9w4w,Uid:46f0b164-7bfa-4087-8e19-0c9375e9344b,Namespace:kube-system,Attempt:0,}" Sep 6 00:29:41.223276 env[1712]: time="2025-09-06T00:29:41.223186937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:29:41.223276 env[1712]: time="2025-09-06T00:29:41.223236263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:29:41.223276 env[1712]: time="2025-09-06T00:29:41.223250831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:29:41.223722 env[1712]: time="2025-09-06T00:29:41.223663633Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558 pid=4027 runtime=io.containerd.runc.v2 Sep 6 00:29:41.237800 systemd[1]: Started cri-containerd-d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558.scope. Sep 6 00:29:41.269163 env[1712]: time="2025-09-06T00:29:41.269118584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9w4w,Uid:46f0b164-7bfa-4087-8e19-0c9375e9344b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\"" Sep 6 00:29:41.271824 env[1712]: time="2025-09-06T00:29:41.271785296Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:29:41.289730 kubelet[2088]: E0906 00:29:41.289653 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:41.292599 env[1712]: time="2025-09-06T00:29:41.292528375Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e\"" Sep 6 00:29:41.293749 env[1712]: time="2025-09-06T00:29:41.293695163Z" level=info msg="StartContainer for \"05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e\"" Sep 6 00:29:41.315477 systemd[1]: Started cri-containerd-05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e.scope. Sep 6 00:29:41.349344 env[1712]: time="2025-09-06T00:29:41.349268500Z" level=info msg="StartContainer for \"05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e\" returns successfully" Sep 6 00:29:41.535830 kubelet[2088]: I0906 00:29:41.535176 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d33f1e8-febd-4d4d-a069-1f364282327b" path="/var/lib/kubelet/pods/6d33f1e8-febd-4d4d-a069-1f364282327b/volumes" Sep 6 00:29:41.634862 systemd[1]: cri-containerd-05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e.scope: Deactivated successfully. 
Sep 6 00:29:41.678975 env[1712]: time="2025-09-06T00:29:41.678919729Z" level=info msg="shim disconnected" id=05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e Sep 6 00:29:41.678975 env[1712]: time="2025-09-06T00:29:41.678964350Z" level=warning msg="cleaning up after shim disconnected" id=05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e namespace=k8s.io Sep 6 00:29:41.678975 env[1712]: time="2025-09-06T00:29:41.678974831Z" level=info msg="cleaning up dead shim" Sep 6 00:29:41.688095 env[1712]: time="2025-09-06T00:29:41.688025643Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4110 runtime=io.containerd.runc.v2\n" Sep 6 00:29:41.805668 env[1712]: time="2025-09-06T00:29:41.805536016Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:29:41.825857 env[1712]: time="2025-09-06T00:29:41.825798458Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636\"" Sep 6 00:29:41.826584 env[1712]: time="2025-09-06T00:29:41.826538649Z" level=info msg="StartContainer for \"d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636\"" Sep 6 00:29:41.846361 systemd[1]: Started cri-containerd-d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636.scope. Sep 6 00:29:41.879577 env[1712]: time="2025-09-06T00:29:41.879502070Z" level=info msg="StartContainer for \"d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636\" returns successfully" Sep 6 00:29:42.059239 systemd[1]: cri-containerd-d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636.scope: Deactivated successfully. 
Sep 6 00:29:42.101872 env[1712]: time="2025-09-06T00:29:42.101817009Z" level=info msg="shim disconnected" id=d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636 Sep 6 00:29:42.101872 env[1712]: time="2025-09-06T00:29:42.101869373Z" level=warning msg="cleaning up after shim disconnected" id=d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636 namespace=k8s.io Sep 6 00:29:42.101872 env[1712]: time="2025-09-06T00:29:42.101879467Z" level=info msg="cleaning up dead shim" Sep 6 00:29:42.110645 env[1712]: time="2025-09-06T00:29:42.110588275Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4174 runtime=io.containerd.runc.v2\n" Sep 6 00:29:42.290512 kubelet[2088]: E0906 00:29:42.290436 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:42.401722 kubelet[2088]: E0906 00:29:42.401420 2088 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:29:42.422157 kubelet[2088]: W0906 00:29:42.422117 2088 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d33f1e8_febd_4d4d_a069_1f364282327b.slice/cri-containerd-466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7.scope WatchSource:0}: container "466aefe2dd96e1e57a9b1fc53bea65751e049910e3def4a82d1e58914507f6f7" in namespace "k8s.io": not found Sep 6 00:29:42.809106 env[1712]: time="2025-09-06T00:29:42.808991423Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:29:42.827256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451398387.mount: Deactivated successfully. Sep 6 00:29:42.836739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421009485.mount: Deactivated successfully. Sep 6 00:29:42.845089 env[1712]: time="2025-09-06T00:29:42.844711807Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06\"" Sep 6 00:29:42.846328 env[1712]: time="2025-09-06T00:29:42.846281639Z" level=info msg="StartContainer for \"26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06\"" Sep 6 00:29:42.867488 systemd[1]: Started cri-containerd-26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06.scope. Sep 6 00:29:42.906574 env[1712]: time="2025-09-06T00:29:42.906526255Z" level=info msg="StartContainer for \"26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06\" returns successfully" Sep 6 00:29:42.976936 systemd[1]: cri-containerd-26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06.scope: Deactivated successfully. 
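The watch events above that report containers such as 466aefe2… as "not found" refer to cgroup paths under the pod's kubepods-burstable-pod<uid>.slice; the short-lived init containers have already exited and been cleaned up by the time the watcher catches up, so these messages appear harmless here. The slice names themselves are derived from the pod UID by the kubelet's systemd cgroup driver, roughly as sketched below (illustrative only, not the kubelet source):

package main

import (
	"fmt"
	"strings"
)

// podSlice sketches how a pod's slice name is built: QoS class plus "pod" plus the
// UID with "-" replaced by "_", since dashes would otherwise be read by systemd as
// slice-nesting separators.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "46f0b164-7bfa-4087-8e19-0c9375e9344b"))
	// kubepods-burstable-pod46f0b164_7bfa_4087_8e19_0c9375e9344b.slice, as seen in the watch events.
}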
Sep 6 00:29:43.016425 env[1712]: time="2025-09-06T00:29:43.016355324Z" level=info msg="shim disconnected" id=26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06 Sep 6 00:29:43.016425 env[1712]: time="2025-09-06T00:29:43.016414887Z" level=warning msg="cleaning up after shim disconnected" id=26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06 namespace=k8s.io Sep 6 00:29:43.016425 env[1712]: time="2025-09-06T00:29:43.016428417Z" level=info msg="cleaning up dead shim" Sep 6 00:29:43.026710 env[1712]: time="2025-09-06T00:29:43.026652264Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4232 runtime=io.containerd.runc.v2\n" Sep 6 00:29:43.290889 kubelet[2088]: E0906 00:29:43.290832 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:43.813092 env[1712]: time="2025-09-06T00:29:43.813050842Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:29:43.829887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33051418.mount: Deactivated successfully. Sep 6 00:29:43.836400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551513949.mount: Deactivated successfully. Sep 6 00:29:43.844286 env[1712]: time="2025-09-06T00:29:43.843990035Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf\"" Sep 6 00:29:43.845031 env[1712]: time="2025-09-06T00:29:43.844629331Z" level=info msg="StartContainer for \"1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf\"" Sep 6 00:29:43.864926 systemd[1]: Started cri-containerd-1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf.scope. Sep 6 00:29:43.895502 systemd[1]: cri-containerd-1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf.scope: Deactivated successfully. 
Sep 6 00:29:43.899190 env[1712]: time="2025-09-06T00:29:43.899136913Z" level=info msg="StartContainer for \"1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf\" returns successfully" Sep 6 00:29:43.974457 env[1712]: time="2025-09-06T00:29:43.974405996Z" level=info msg="shim disconnected" id=1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf Sep 6 00:29:43.974457 env[1712]: time="2025-09-06T00:29:43.974455827Z" level=warning msg="cleaning up after shim disconnected" id=1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf namespace=k8s.io Sep 6 00:29:43.974457 env[1712]: time="2025-09-06T00:29:43.974465934Z" level=info msg="cleaning up dead shim" Sep 6 00:29:43.985014 env[1712]: time="2025-09-06T00:29:43.984956261Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:29:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4287 runtime=io.containerd.runc.v2\n" Sep 6 00:29:44.291769 kubelet[2088]: E0906 00:29:44.291708 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:44.817711 env[1712]: time="2025-09-06T00:29:44.817667094Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:29:44.841273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424211297.mount: Deactivated successfully. Sep 6 00:29:44.855190 env[1712]: time="2025-09-06T00:29:44.855121331Z" level=info msg="CreateContainer within sandbox \"d6e4a3d39fabfff7009b2ddc68ffba1597944007fb417d1fe2ea58dff03ba558\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61611bc78e66084a5d4fbcea4029a0ef37390779f0dfa6f890c1341adaaf0dd6\"" Sep 6 00:29:44.856197 env[1712]: time="2025-09-06T00:29:44.856154040Z" level=info msg="StartContainer for \"61611bc78e66084a5d4fbcea4029a0ef37390779f0dfa6f890c1341adaaf0dd6\"" Sep 6 00:29:44.878490 systemd[1]: Started cri-containerd-61611bc78e66084a5d4fbcea4029a0ef37390779f0dfa6f890c1341adaaf0dd6.scope. 
Sep 6 00:29:44.917403 env[1712]: time="2025-09-06T00:29:44.917062107Z" level=info msg="StartContainer for \"61611bc78e66084a5d4fbcea4029a0ef37390779f0dfa6f890c1341adaaf0dd6\" returns successfully" Sep 6 00:29:45.291907 kubelet[2088]: E0906 00:29:45.291807 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:45.539165 kubelet[2088]: W0906 00:29:45.539129 2088 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f0b164_7bfa_4087_8e19_0c9375e9344b.slice/cri-containerd-05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e.scope WatchSource:0}: task 05fe5ad39a85abc4fb520df4f664d40dd84f9fcae8d686a820b1ff781108454e not found: not found Sep 6 00:29:46.292054 kubelet[2088]: E0906 00:29:46.292003 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:47.292786 kubelet[2088]: E0906 00:29:47.292728 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:47.413350 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 6 00:29:48.102710 systemd[1]: run-containerd-runc-k8s.io-61611bc78e66084a5d4fbcea4029a0ef37390779f0dfa6f890c1341adaaf0dd6-runc.YvLigG.mount: Deactivated successfully. Sep 6 00:29:48.293492 kubelet[2088]: E0906 00:29:48.293384 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:48.647836 kubelet[2088]: W0906 00:29:48.647783 2088 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f0b164_7bfa_4087_8e19_0c9375e9344b.slice/cri-containerd-d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636.scope WatchSource:0}: task d8cd6a7800b9b3397e20e36117f8dadb619b8f2eaf50db4db9f2b506d0ffc636 not found: not found Sep 6 00:29:49.294455 kubelet[2088]: E0906 00:29:49.294403 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:50.296004 kubelet[2088]: E0906 00:29:50.295960 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:50.853861 systemd-networkd[1444]: lxc_health: Link UP Sep 6 00:29:50.858336 (udev-worker)[4875]: Network interface NamePolicy= disabled on kernel command line. 
Sep 6 00:29:50.872096 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:29:50.872332 systemd-networkd[1444]: lxc_health: Gained carrier Sep 6 00:29:51.224871 kubelet[2088]: I0906 00:29:51.224707 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p9w4w" podStartSLOduration=11.224668702 podStartE2EDuration="11.224668702s" podCreationTimestamp="2025-09-06 00:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:29:46.848904712 +0000 UTC m=+70.203271268" watchObservedRunningTime="2025-09-06 00:29:51.224668702 +0000 UTC m=+74.579035261" Sep 6 00:29:51.296817 kubelet[2088]: E0906 00:29:51.296747 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:51.758279 kubelet[2088]: W0906 00:29:51.758237 2088 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f0b164_7bfa_4087_8e19_0c9375e9344b.slice/cri-containerd-26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06.scope WatchSource:0}: task 26e3ecf6af62535da9344b5122191d3358af8eb21fc8b1173d1c55aa3d494e06 not found: not found Sep 6 00:29:52.297937 kubelet[2088]: E0906 00:29:52.297882 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:52.370595 systemd-networkd[1444]: lxc_health: Gained IPv6LL Sep 6 00:29:53.298869 kubelet[2088]: E0906 00:29:53.298792 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:54.300000 kubelet[2088]: E0906 00:29:54.299948 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:54.779284 systemd[1]: run-containerd-runc-k8s.io-61611bc78e66084a5d4fbcea4029a0ef37390779f0dfa6f890c1341adaaf0dd6-runc.VaRFoA.mount: Deactivated successfully. 
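The startup-latency entry above works out exactly: cilium-p9w4w was created at 00:29:40 and observed running at 00:29:51.224668702, giving the reported 11.224668702s, and the zero pull timestamps indicate no pull was needed (the cilium agent image was evidently already present from the earlier cilium-qjmqj attempt). A quick check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet's timestamp format in the entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-06 00:29:40 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-06 00:29:51.224668702 +0000 UTC")
	fmt.Println(running.Sub(created)) // 11.224668702s, the reported podStartE2EDuration
}

The cilium-operator pod reported further down shows a much longer end-to-end duration because its image pull only completes at 00:29:57.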
Sep 6 00:29:54.878344 kubelet[2088]: W0906 00:29:54.876071 2088 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f0b164_7bfa_4087_8e19_0c9375e9344b.slice/cri-containerd-1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf.scope WatchSource:0}: task 1b2002e54c361951cda47dc8443f1cd6ec20bb561a097f966e72b7453d5ba5cf not found: not found Sep 6 00:29:55.300354 kubelet[2088]: E0906 00:29:55.300292 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:55.536676 env[1712]: time="2025-09-06T00:29:55.536311429Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:29:56.301170 kubelet[2088]: E0906 00:29:56.301059 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:57.242693 kubelet[2088]: E0906 00:29:57.242650 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:57.301720 kubelet[2088]: E0906 00:29:57.301651 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:29:57.680870 env[1712]: time="2025-09-06T00:29:57.680786811Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:57.685057 env[1712]: time="2025-09-06T00:29:57.685010211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:57.688229 env[1712]: time="2025-09-06T00:29:57.688189845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:29:57.689086 env[1712]: time="2025-09-06T00:29:57.689048003Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 00:29:57.691593 env[1712]: time="2025-09-06T00:29:57.691525460Z" level=info msg="CreateContainer within sandbox \"ea0ab41e41d7787eedd70d2236d72589f6acc49d42becf0d305520d92ed93060\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:29:57.713737 env[1712]: time="2025-09-06T00:29:57.713691277Z" level=info msg="CreateContainer within sandbox \"ea0ab41e41d7787eedd70d2236d72589f6acc49d42becf0d305520d92ed93060\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef56c3ca9ee0b64848ff74d803a0625f3ed51fc01416d17ae9a9e710249a66b6\"" Sep 6 00:29:57.714446 env[1712]: time="2025-09-06T00:29:57.714416705Z" level=info msg="StartContainer for \"ef56c3ca9ee0b64848ff74d803a0625f3ed51fc01416d17ae9a9e710249a66b6\"" Sep 6 00:29:57.737227 systemd[1]: Started cri-containerd-ef56c3ca9ee0b64848ff74d803a0625f3ed51fc01416d17ae9a9e710249a66b6.scope. 
Sep 6 00:29:57.804712 env[1712]: time="2025-09-06T00:29:57.804622455Z" level=info msg="StartContainer for \"ef56c3ca9ee0b64848ff74d803a0625f3ed51fc01416d17ae9a9e710249a66b6\" returns successfully"
Sep 6 00:29:57.861526 kubelet[2088]: I0906 00:29:57.861033 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jv6h4" podStartSLOduration=1.394727414 podStartE2EDuration="19.861012844s" podCreationTimestamp="2025-09-06 00:29:38 +0000 UTC" firstStartedPulling="2025-09-06 00:29:39.223986325 +0000 UTC m=+62.578352870" lastFinishedPulling="2025-09-06 00:29:57.690271768 +0000 UTC m=+81.044638300" observedRunningTime="2025-09-06 00:29:57.860499433 +0000 UTC m=+81.214866000" watchObservedRunningTime="2025-09-06 00:29:57.861012844 +0000 UTC m=+81.215379397"
Sep 6 00:29:58.302356 kubelet[2088]: E0906 00:29:58.302283 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:29:59.302684 kubelet[2088]: E0906 00:29:59.302616 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:00.302886 kubelet[2088]: E0906 00:30:00.302827 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:01.304780 kubelet[2088]: E0906 00:30:01.304275 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:02.305913 kubelet[2088]: E0906 00:30:02.305850 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:03.306555 kubelet[2088]: E0906 00:30:03.306505 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:04.307608 kubelet[2088]: E0906 00:30:04.307544 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:05.307699 kubelet[2088]: E0906 00:30:05.307657 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:06.308599 kubelet[2088]: E0906 00:30:06.308550 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:07.309472 kubelet[2088]: E0906 00:30:07.309303 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:08.310628 kubelet[2088]: E0906 00:30:08.310570 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:09.311438 kubelet[2088]: E0906 00:30:09.311392 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:10.312285 kubelet[2088]: E0906 00:30:10.312143 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:11.313127 kubelet[2088]: E0906 00:30:11.313076 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:12.313272 kubelet[2088]: E0906 00:30:12.313193 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:13.313572 kubelet[2088]: E0906 00:30:13.313526 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:14.314596 kubelet[2088]: E0906 00:30:14.314518 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:15.315213 kubelet[2088]: E0906 00:30:15.315165 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:16.315365 kubelet[2088]: E0906 00:30:16.315307 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:17.242918 kubelet[2088]: E0906 00:30:17.242842 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:17.316335 kubelet[2088]: E0906 00:30:17.316287 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:18.316724 kubelet[2088]: E0906 00:30:18.316646 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:19.091096 kubelet[2088]: E0906 00:30:19.090999 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": context deadline exceeded"
Sep 6 00:30:19.181250 kubelet[2088]: E0906 00:30:19.181188 2088 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-06T00:30:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-06T00:30:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-06T00:30:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-06T00:30:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73307688},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\\\",\\\"registry.k8s.io/kube-proxy:v1.32.8\\\"],\\\"sizeBytes\\\":30896189},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.18.181\": Patch \"https://172.31.23.140:6443/api/v1/nodes/172.31.18.181/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:30:19.317163 kubelet[2088]: E0906 00:30:19.317103 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:20.317891 kubelet[2088]: E0906 00:30:20.317835 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:21.318454 kubelet[2088]: E0906 00:30:21.318254 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:22.319415 kubelet[2088]: E0906 00:30:22.319349 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:23.320760 kubelet[2088]: E0906 00:30:23.320689 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:24.321239 kubelet[2088]: E0906 00:30:24.321176 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:25.321994 kubelet[2088]: E0906 00:30:25.321937 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:26.323070 kubelet[2088]: E0906 00:30:26.323025 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:27.323245 kubelet[2088]: E0906 00:30:27.323169 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:28.323652 kubelet[2088]: E0906 00:30:28.323608 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:29.091427 kubelet[2088]: E0906 00:30:29.091280 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:30:29.182895 kubelet[2088]: E0906 00:30:29.182620 2088 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.18.181\": Get \"https://172.31.23.140:6443/api/v1/nodes/172.31.18.181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:30:29.324375 kubelet[2088]: E0906 00:30:29.324309 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:30.325261 kubelet[2088]: E0906 00:30:30.325204 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:31.326149 kubelet[2088]: E0906 00:30:31.326102 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:32.326749 kubelet[2088]: E0906 00:30:32.326663 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:33.327457 kubelet[2088]: E0906 00:30:33.327384 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:34.327949 kubelet[2088]: E0906 00:30:34.327872 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:35.328191 kubelet[2088]: E0906 00:30:35.328082 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:36.328497 kubelet[2088]: E0906 00:30:36.328438 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:37.242974 kubelet[2088]: E0906 00:30:37.242931 2088 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:37.294000 env[1712]: time="2025-09-06T00:30:37.293952080Z" level=info msg="StopPodSandbox for \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\""
Sep 6 00:30:37.294849 env[1712]: time="2025-09-06T00:30:37.294069429Z" level=info msg="TearDown network for sandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" successfully"
Sep 6 00:30:37.294849 env[1712]: time="2025-09-06T00:30:37.294134872Z" level=info msg="StopPodSandbox for \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" returns successfully"
Sep 6 00:30:37.294849 env[1712]: time="2025-09-06T00:30:37.294588743Z" level=info msg="RemovePodSandbox for \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\""
Sep 6 00:30:37.294849 env[1712]: time="2025-09-06T00:30:37.294628145Z" level=info msg="Forcibly stopping sandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\""
Sep 6 00:30:37.294849 env[1712]: time="2025-09-06T00:30:37.294725227Z" level=info msg="TearDown network for sandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" successfully"
Sep 6 00:30:37.301554 env[1712]: time="2025-09-06T00:30:37.301503861Z" level=info msg="RemovePodSandbox \"7ac23779894f014aeb192d9dbc337db623ce11ae0066e1c4b73c68c09e07479e\" returns successfully"
Sep 6 00:30:37.329070 kubelet[2088]: E0906 00:30:37.329025 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:38.329996 kubelet[2088]: E0906 00:30:38.329958 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:39.092410 kubelet[2088]: E0906 00:30:39.092272 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:30:39.184130 kubelet[2088]: E0906 00:30:39.184071 2088 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.18.181\": Get \"https://172.31.23.140:6443/api/v1/nodes/172.31.18.181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 6 00:30:39.330742 kubelet[2088]: E0906 00:30:39.330525 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:40.331289 kubelet[2088]: E0906 00:30:40.331212 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:40.590488 kubelet[2088]: E0906 00:30:40.589588 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": unexpected EOF"
Sep 6 00:30:40.597750 kubelet[2088]: E0906 00:30:40.597697 2088 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": read tcp 172.31.18.181:43614->172.31.23.140:6443: read: connection reset by peer"
Sep 6 00:30:40.597750 kubelet[2088]: I0906 00:30:40.597736 2088 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Sep 6 00:30:40.598218 kubelet[2088]: E0906 00:30:40.598180 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="200ms"
Sep 6 00:30:40.799705 kubelet[2088]: E0906 00:30:40.799646 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="400ms"
Sep 6 00:30:41.200749 kubelet[2088]: E0906 00:30:41.200706 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused" interval="800ms"
Sep 6 00:30:41.332086 kubelet[2088]: E0906 00:30:41.332025 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:41.590539 kubelet[2088]: E0906 00:30:41.590501 2088 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.18.181\": Get \"https://172.31.23.140:6443/api/v1/nodes/172.31.18.181?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Sep 6 00:30:41.591347 kubelet[2088]: E0906 00:30:41.591281 2088 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.18.181\": Get \"https://172.31.23.140:6443/api/v1/nodes/172.31.18.181?timeout=10s\": dial tcp 172.31.23.140:6443: connect: connection refused"
Sep 6 00:30:41.591347 kubelet[2088]: E0906 00:30:41.591306 2088 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count"
Sep 6 00:30:42.332550 kubelet[2088]: E0906 00:30:42.332481 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:43.332978 kubelet[2088]: E0906 00:30:43.332921 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:44.333221 kubelet[2088]: E0906 00:30:44.333108 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:45.333807 kubelet[2088]: E0906 00:30:45.333723 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:46.334919 kubelet[2088]: E0906 00:30:46.334860 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:47.335750 kubelet[2088]: E0906 00:30:47.335693 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:48.336770 kubelet[2088]: E0906 00:30:48.336729 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:49.337272 kubelet[2088]: E0906 00:30:49.337224 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:50.338048 kubelet[2088]: E0906 00:30:50.337976 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:51.338707 kubelet[2088]: E0906 00:30:51.338653 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:52.001675 kubelet[2088]: E0906 00:30:52.001616 2088 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.181?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Sep 6 00:30:52.339837 kubelet[2088]: E0906 00:30:52.339788 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:53.340782 kubelet[2088]: E0906 00:30:53.340704 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:54.341629 kubelet[2088]: E0906 00:30:54.341549 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:55.342448 kubelet[2088]: E0906 00:30:55.342401 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 6 00:30:56.343128 kubelet[2088]: E0906 00:30:56.342846 2088 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"