Sep 13 00:55:55.984525 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025 Sep 13 00:55:55.984555 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:55:55.984574 kernel: BIOS-provided physical RAM map: Sep 13 00:55:55.984586 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:55:55.984596 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Sep 13 00:55:55.984607 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 13 00:55:55.984621 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 13 00:55:55.984633 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 13 00:55:55.984647 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 13 00:55:55.984658 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 13 00:55:55.984670 kernel: NX (Execute Disable) protection: active Sep 13 00:55:55.984681 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 13 00:55:55.984693 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable Sep 13 00:55:55.984705 kernel: extended physical RAM map: Sep 13 00:55:55.984722 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 13 00:55:55.984735 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable Sep 13 00:55:55.984747 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable Sep 13 00:55:55.984759 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable Sep 13 00:55:55.984772 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved Sep 13 00:55:55.984784 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Sep 13 00:55:55.984796 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Sep 13 00:55:55.984809 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable Sep 13 00:55:55.984821 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Sep 13 00:55:55.984834 kernel: efi: EFI v2.70 by EDK II Sep 13 00:55:55.984848 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98 Sep 13 00:55:55.984861 kernel: SMBIOS 2.7 present. 
Sep 13 00:55:55.984873 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Sep 13 00:55:55.984885 kernel: Hypervisor detected: KVM Sep 13 00:55:55.984897 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 00:55:55.984910 kernel: kvm-clock: cpu 0, msr 4a19f001, primary cpu clock Sep 13 00:55:55.984922 kernel: kvm-clock: using sched offset of 3992808115 cycles Sep 13 00:55:55.984935 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 00:55:55.984948 kernel: tsc: Detected 2499.996 MHz processor Sep 13 00:55:55.984961 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 00:55:55.984974 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 00:55:55.984989 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Sep 13 00:55:55.985002 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 00:55:55.985014 kernel: Using GB pages for direct mapping Sep 13 00:55:55.985027 kernel: Secure boot disabled Sep 13 00:55:55.985040 kernel: ACPI: Early table checksum verification disabled Sep 13 00:55:55.985058 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Sep 13 00:55:55.985072 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Sep 13 00:55:55.985088 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 13 00:55:55.985119 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Sep 13 00:55:55.985140 kernel: ACPI: FACS 0x00000000789D0000 000040 Sep 13 00:55:55.985151 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Sep 13 00:55:55.985162 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 13 00:55:55.985173 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 13 00:55:55.985185 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Sep 13 00:55:55.985201 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Sep 13 00:55:55.985214 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 13 00:55:55.985225 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Sep 13 00:55:55.985237 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Sep 13 00:55:55.985248 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Sep 13 00:55:55.985258 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Sep 13 00:55:55.985270 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Sep 13 00:55:55.985281 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Sep 13 00:55:55.985295 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Sep 13 00:55:55.985311 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Sep 13 00:55:55.985324 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Sep 13 00:55:55.985337 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Sep 13 00:55:55.985350 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Sep 13 00:55:55.985363 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Sep 13 00:55:55.985376 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Sep 13 
00:55:55.985389 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 00:55:55.985402 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 13 00:55:55.985415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Sep 13 00:55:55.985430 kernel: NUMA: Initialized distance table, cnt=1 Sep 13 00:55:55.985443 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Sep 13 00:55:55.985456 kernel: Zone ranges: Sep 13 00:55:55.985469 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:55:55.985482 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Sep 13 00:55:55.985495 kernel: Normal empty Sep 13 00:55:55.985506 kernel: Movable zone start for each node Sep 13 00:55:55.985518 kernel: Early memory node ranges Sep 13 00:55:55.985530 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 13 00:55:55.985546 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Sep 13 00:55:55.985558 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Sep 13 00:55:55.985570 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Sep 13 00:55:55.985583 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:55:55.985595 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 13 00:55:55.985608 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 13 00:55:55.985622 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Sep 13 00:55:55.985636 kernel: ACPI: PM-Timer IO Port: 0xb008 Sep 13 00:55:55.985650 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 00:55:55.985667 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Sep 13 00:55:55.985680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 00:55:55.985693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:55:55.985706 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 00:55:55.985721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 00:55:55.985735 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:55:55.985748 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:55:55.985760 kernel: TSC deadline timer available Sep 13 00:55:55.985771 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Sep 13 00:55:55.985785 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Sep 13 00:55:55.985797 kernel: Booting paravirtualized kernel on KVM Sep 13 00:55:55.985809 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:55:55.985820 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Sep 13 00:55:55.985832 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Sep 13 00:55:55.985843 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Sep 13 00:55:55.985855 kernel: pcpu-alloc: [0] 0 1 Sep 13 00:55:55.985867 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0 Sep 13 00:55:55.985878 kernel: kvm-guest: PV spinlocks enabled Sep 13 00:55:55.985894 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 00:55:55.985905 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Sep 13 00:55:55.985918 kernel: Policy zone: DMA32 Sep 13 00:55:55.985932 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:55:55.985944 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:55:55.985956 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:55:55.985968 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 00:55:55.985980 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:55:55.985997 kernel: Memory: 1876640K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 160904K reserved, 0K cma-reserved) Sep 13 00:55:55.986010 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 13 00:55:55.986022 kernel: Kernel/User page tables isolation: enabled Sep 13 00:55:55.986033 kernel: ftrace: allocating 34614 entries in 136 pages Sep 13 00:55:55.992688 kernel: ftrace: allocated 136 pages with 2 groups Sep 13 00:55:55.992719 kernel: rcu: Hierarchical RCU implementation. Sep 13 00:55:55.992736 kernel: rcu: RCU event tracing is enabled. Sep 13 00:55:55.992767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 13 00:55:55.992783 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:55:55.992798 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:55:55.992813 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 00:55:55.992828 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 13 00:55:55.992846 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Sep 13 00:55:55.992862 kernel: random: crng init done Sep 13 00:55:55.992877 kernel: Console: colour dummy device 80x25 Sep 13 00:55:55.992891 kernel: printk: console [tty0] enabled Sep 13 00:55:55.992906 kernel: printk: console [ttyS0] enabled Sep 13 00:55:55.992921 kernel: ACPI: Core revision 20210730 Sep 13 00:55:55.992936 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Sep 13 00:55:55.992955 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:55:55.992969 kernel: x2apic enabled Sep 13 00:55:55.992984 kernel: Switched APIC routing to physical x2apic. Sep 13 00:55:55.992999 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 13 00:55:55.993014 kernel: Calibrating delay loop (skipped) preset value.. 
4999.99 BogoMIPS (lpj=2499996) Sep 13 00:55:55.993029 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 13 00:55:55.993044 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 13 00:55:55.993062 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:55:55.993076 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 00:55:55.993091 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 00:55:55.993126 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Sep 13 00:55:55.993137 kernel: RETBleed: Vulnerable Sep 13 00:55:55.993148 kernel: Speculative Store Bypass: Vulnerable Sep 13 00:55:55.993159 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:55:55.993172 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 13 00:55:55.993184 kernel: GDS: Unknown: Dependent on hypervisor status Sep 13 00:55:55.993198 kernel: active return thunk: its_return_thunk Sep 13 00:55:55.993212 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:55:55.993230 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:55:55.993245 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:55:55.993259 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:55:55.993274 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 13 00:55:55.993288 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 13 00:55:55.993303 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 13 00:55:55.993318 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 13 00:55:55.993333 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 13 00:55:55.993348 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Sep 13 00:55:55.993362 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:55:55.993377 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 13 00:55:55.993395 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 13 00:55:55.993410 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Sep 13 00:55:55.993424 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Sep 13 00:55:55.993439 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Sep 13 00:55:55.993453 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Sep 13 00:55:55.993469 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Sep 13 00:55:55.993483 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:55:55.993498 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:55:55.993513 kernel: LSM: Security Framework initializing Sep 13 00:55:55.993527 kernel: SELinux: Initializing. Sep 13 00:55:55.993541 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:55:55.993559 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 00:55:55.993574 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Sep 13 00:55:55.993589 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Sep 13 00:55:55.993604 kernel: signal: max sigframe size: 3632 Sep 13 00:55:55.993620 kernel: rcu: Hierarchical SRCU implementation. 
Sep 13 00:55:55.993635 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 00:55:55.993650 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:55:55.993665 kernel: x86: Booting SMP configuration: Sep 13 00:55:55.993680 kernel: .... node #0, CPUs: #1 Sep 13 00:55:55.993695 kernel: kvm-clock: cpu 1, msr 4a19f041, secondary cpu clock Sep 13 00:55:55.993713 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0 Sep 13 00:55:55.993730 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Sep 13 00:55:55.993746 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 13 00:55:55.993762 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 00:55:55.993776 kernel: smpboot: Max logical packages: 1 Sep 13 00:55:55.993792 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Sep 13 00:55:55.993807 kernel: devtmpfs: initialized Sep 13 00:55:55.993822 kernel: x86/mm: Memory block size: 128MB Sep 13 00:55:55.993840 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Sep 13 00:55:55.993855 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:55:55.993870 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 13 00:55:55.993885 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:55:55.993900 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:55:55.993914 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:55:55.993930 kernel: audit: type=2000 audit(1757724955.576:1): state=initialized audit_enabled=0 res=1 Sep 13 00:55:55.993944 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:55:55.993960 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:55:55.993978 kernel: cpuidle: using governor menu Sep 13 00:55:55.993993 kernel: ACPI: bus type PCI registered Sep 13 00:55:55.994008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:55:55.994023 kernel: dca service started, version 1.12.1 Sep 13 00:55:55.994038 kernel: PCI: Using configuration type 1 for base access Sep 13 00:55:55.994053 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 00:55:55.994067 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:55:55.994081 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:55:55.994093 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:55:55.994129 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:55:55.994143 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:55:55.994156 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 13 00:55:55.994170 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 13 00:55:55.994184 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 13 00:55:55.994197 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Sep 13 00:55:55.994212 kernel: ACPI: Interpreter enabled Sep 13 00:55:55.994225 kernel: ACPI: PM: (supports S0 S5) Sep 13 00:55:55.994252 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:55:55.994270 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:55:55.994281 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Sep 13 00:55:55.994294 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:55:55.994520 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:55:55.994659 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Sep 13 00:55:55.994677 kernel: acpiphp: Slot [3] registered Sep 13 00:55:55.994692 kernel: acpiphp: Slot [4] registered Sep 13 00:55:55.994711 kernel: acpiphp: Slot [5] registered Sep 13 00:55:55.994724 kernel: acpiphp: Slot [6] registered Sep 13 00:55:55.994739 kernel: acpiphp: Slot [7] registered Sep 13 00:55:55.994752 kernel: acpiphp: Slot [8] registered Sep 13 00:55:55.994765 kernel: acpiphp: Slot [9] registered Sep 13 00:55:55.994780 kernel: acpiphp: Slot [10] registered Sep 13 00:55:55.994793 kernel: acpiphp: Slot [11] registered Sep 13 00:55:55.994806 kernel: acpiphp: Slot [12] registered Sep 13 00:55:55.994825 kernel: acpiphp: Slot [13] registered Sep 13 00:55:55.994840 kernel: acpiphp: Slot [14] registered Sep 13 00:55:55.994857 kernel: acpiphp: Slot [15] registered Sep 13 00:55:55.994870 kernel: acpiphp: Slot [16] registered Sep 13 00:55:55.994884 kernel: acpiphp: Slot [17] registered Sep 13 00:55:55.994898 kernel: acpiphp: Slot [18] registered Sep 13 00:55:55.994912 kernel: acpiphp: Slot [19] registered Sep 13 00:55:55.994926 kernel: acpiphp: Slot [20] registered Sep 13 00:55:55.994941 kernel: acpiphp: Slot [21] registered Sep 13 00:55:55.994956 kernel: acpiphp: Slot [22] registered Sep 13 00:55:55.994970 kernel: acpiphp: Slot [23] registered Sep 13 00:55:55.994988 kernel: acpiphp: Slot [24] registered Sep 13 00:55:55.995003 kernel: acpiphp: Slot [25] registered Sep 13 00:55:55.995017 kernel: acpiphp: Slot [26] registered Sep 13 00:55:55.995032 kernel: acpiphp: Slot [27] registered Sep 13 00:55:55.995047 kernel: acpiphp: Slot [28] registered Sep 13 00:55:55.995062 kernel: acpiphp: Slot [29] registered Sep 13 00:55:55.995076 kernel: acpiphp: Slot [30] registered Sep 13 00:55:55.995091 kernel: acpiphp: Slot [31] registered Sep 13 00:55:55.995131 kernel: PCI host bridge to bus 0000:00 Sep 13 00:55:55.995280 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:55:55.995412 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:55:55.995536 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:55:55.995650 kernel: 
pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Sep 13 00:55:55.995767 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Sep 13 00:55:55.995897 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:55:55.996055 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Sep 13 00:55:55.996279 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Sep 13 00:55:55.996431 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Sep 13 00:55:55.996566 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Sep 13 00:55:55.996698 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Sep 13 00:55:55.996833 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Sep 13 00:55:55.996965 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Sep 13 00:55:55.997127 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Sep 13 00:55:55.997272 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Sep 13 00:55:55.997404 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Sep 13 00:55:55.997545 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Sep 13 00:55:55.997680 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Sep 13 00:55:55.997818 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 13 00:55:55.997961 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Sep 13 00:55:55.998093 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:55:55.998279 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 13 00:55:55.998413 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Sep 13 00:55:55.998564 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 13 00:55:55.998696 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Sep 13 00:55:55.998716 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 00:55:55.998732 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 00:55:55.998752 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 00:55:55.998767 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 00:55:55.998783 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Sep 13 00:55:55.998799 kernel: iommu: Default domain type: Translated Sep 13 00:55:55.998816 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:55:55.998955 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Sep 13 00:55:55.999090 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:55:55.999237 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Sep 13 00:55:55.999256 kernel: vgaarb: loaded Sep 13 00:55:55.999276 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:55:55.999293 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:55:55.999308 kernel: PTP clock support registered Sep 13 00:55:55.999323 kernel: Registered efivars operations Sep 13 00:55:55.999338 kernel: PCI: Using ACPI for IRQ routing Sep 13 00:55:55.999354 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:55:55.999369 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] Sep 13 00:55:55.999384 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Sep 13 00:55:55.999399 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Sep 13 00:55:55.999417 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 13 00:55:55.999432 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Sep 13 00:55:55.999447 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 00:55:55.999463 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:55:55.999478 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:55:55.999494 kernel: pnp: PnP ACPI init Sep 13 00:55:55.999509 kernel: pnp: PnP ACPI: found 5 devices Sep 13 00:55:55.999522 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:55:55.999536 kernel: NET: Registered PF_INET protocol family Sep 13 00:55:55.999554 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:55:55.999570 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 13 00:55:55.999585 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:55:55.999600 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 00:55:55.999616 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 00:55:55.999631 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 00:55:55.999646 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:55:55.999661 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 00:55:55.999675 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:55:55.999693 kernel: NET: Registered PF_XDP protocol family Sep 13 00:55:55.999814 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:55:55.999925 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:55:56.000035 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:55:56.000236 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Sep 13 00:55:56.000462 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Sep 13 00:55:56.003808 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 13 00:55:56.003973 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Sep 13 00:55:56.004001 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:55:56.004017 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 00:55:56.004032 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Sep 13 00:55:56.004047 kernel: clocksource: Switched to clocksource tsc Sep 13 00:55:56.004062 kernel: Initialise system trusted keyrings Sep 13 00:55:56.004078 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 00:55:56.004093 kernel: Key type asymmetric registered Sep 13 00:55:56.004125 kernel: Asymmetric key parser 'x509' registered Sep 13 00:55:56.004143 kernel: Block layer SCSI generic (bsg) 
driver version 0.4 loaded (major 249) Sep 13 00:55:56.004159 kernel: io scheduler mq-deadline registered Sep 13 00:55:56.004174 kernel: io scheduler kyber registered Sep 13 00:55:56.004189 kernel: io scheduler bfq registered Sep 13 00:55:56.004204 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:55:56.004219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:55:56.004233 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:55:56.004249 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 00:55:56.004264 kernel: i8042: Warning: Keylock active Sep 13 00:55:56.004282 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 00:55:56.004297 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 00:55:56.004441 kernel: rtc_cmos 00:00: RTC can wake from S4 Sep 13 00:55:56.004553 kernel: rtc_cmos 00:00: registered as rtc0 Sep 13 00:55:56.004664 kernel: rtc_cmos 00:00: setting system clock to 2025-09-13T00:55:55 UTC (1757724955) Sep 13 00:55:56.004772 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Sep 13 00:55:56.004788 kernel: intel_pstate: CPU model not supported Sep 13 00:55:56.004802 kernel: efifb: probing for efifb Sep 13 00:55:56.004817 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k Sep 13 00:55:56.004830 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Sep 13 00:55:56.004843 kernel: efifb: scrolling: redraw Sep 13 00:55:56.004855 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 13 00:55:56.004868 kernel: Console: switching to colour frame buffer device 100x37 Sep 13 00:55:56.004881 kernel: fb0: EFI VGA frame buffer device Sep 13 00:55:56.004920 kernel: pstore: Registered efi as persistent store backend Sep 13 00:55:56.004938 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:55:56.004951 kernel: Segment Routing with IPv6 Sep 13 00:55:56.004967 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:55:56.004980 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:55:56.004994 kernel: Key type dns_resolver registered Sep 13 00:55:56.005007 kernel: IPI shorthand broadcast: enabled Sep 13 00:55:56.005020 kernel: sched_clock: Marking stable (378137984, 133761474)->(575510720, -63611262) Sep 13 00:55:56.005033 kernel: registered taskstats version 1 Sep 13 00:55:56.005047 kernel: Loading compiled-in X.509 certificates Sep 13 00:55:56.005060 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37' Sep 13 00:55:56.005073 kernel: Key type .fscrypt registered Sep 13 00:55:56.005090 kernel: Key type fscrypt-provisioning registered Sep 13 00:55:56.005117 kernel: pstore: Using crash dump compression: deflate Sep 13 00:55:56.005131 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:55:56.005144 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:55:56.005161 kernel: ima: No architecture policies found Sep 13 00:55:56.005175 kernel: clk: Disabling unused clocks Sep 13 00:55:56.005188 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 13 00:55:56.005202 kernel: Write protecting the kernel read-only data: 28672k Sep 13 00:55:56.005216 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 13 00:55:56.005233 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 13 00:55:56.005247 kernel: Run /init as init process Sep 13 00:55:56.005260 kernel: with arguments: Sep 13 00:55:56.005274 kernel: /init Sep 13 00:55:56.005287 kernel: with environment: Sep 13 00:55:56.005301 kernel: HOME=/ Sep 13 00:55:56.005314 kernel: TERM=linux Sep 13 00:55:56.005329 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:55:56.005347 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:55:56.005366 systemd[1]: Detected virtualization amazon. Sep 13 00:55:56.005380 systemd[1]: Detected architecture x86-64. Sep 13 00:55:56.005394 systemd[1]: Running in initrd. Sep 13 00:55:56.005408 systemd[1]: No hostname configured, using default hostname. Sep 13 00:55:56.005422 systemd[1]: Hostname set to . Sep 13 00:55:56.005437 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:55:56.005451 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:55:56.005468 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:55:56.005489 systemd[1]: Reached target cryptsetup.target. Sep 13 00:55:56.005502 systemd[1]: Reached target paths.target. Sep 13 00:55:56.005515 systemd[1]: Reached target slices.target. Sep 13 00:55:56.005531 systemd[1]: Reached target swap.target. Sep 13 00:55:56.005550 systemd[1]: Reached target timers.target. Sep 13 00:55:56.005566 systemd[1]: Listening on iscsid.socket. Sep 13 00:55:56.005582 systemd[1]: Listening on iscsiuio.socket. Sep 13 00:55:56.005598 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 00:55:56.005614 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 00:55:56.005631 systemd[1]: Listening on systemd-journald.socket. Sep 13 00:55:56.005647 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:55:56.005664 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 00:55:56.005683 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:55:56.005699 systemd[1]: Reached target sockets.target. Sep 13 00:55:56.005716 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:55:56.005732 systemd[1]: Finished network-cleanup.service. Sep 13 00:55:56.005748 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:55:56.005765 systemd[1]: Starting systemd-journald.service... Sep 13 00:55:56.005782 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:55:56.005798 systemd[1]: Starting systemd-resolved.service... Sep 13 00:55:56.005814 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 00:55:56.005833 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 00:55:56.005850 systemd[1]: Finished kmod-static-nodes.service. 
Sep 13 00:55:56.005873 systemd-journald[185]: Journal started Sep 13 00:55:56.005950 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2b3fbc81447d96ae565ada1dd7da32) is 4.8M, max 38.3M, 33.5M free. Sep 13 00:55:56.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.019901 kernel: audit: type=1130 audit(1757724956.009:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.019964 systemd[1]: Started systemd-journald.service. Sep 13 00:55:56.024149 systemd-modules-load[186]: Inserted module 'overlay' Sep 13 00:55:56.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.032933 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:55:56.034306 kernel: audit: type=1130 audit(1757724956.025:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.035709 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 00:55:56.039807 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 00:55:56.049692 kernel: audit: type=1130 audit(1757724956.034:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.056544 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 00:55:56.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.065616 systemd-resolved[187]: Positive Trust Anchors: Sep 13 00:55:56.072262 kernel: audit: type=1130 audit(1757724956.037:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.065628 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:55:56.065688 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:55:56.099969 kernel: audit: type=1130 audit(1757724956.074:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:56.100002 kernel: audit: type=1130 audit(1757724956.093:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.069452 systemd-resolved[187]: Defaulting to hostname 'linux'. Sep 13 00:55:56.121010 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:55:56.121045 kernel: audit: type=1130 audit(1757724956.095:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.070500 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 00:55:56.083865 systemd[1]: Started systemd-resolved.service. Sep 13 00:55:56.095237 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 00:55:56.096482 systemd[1]: Reached target nss-lookup.target. Sep 13 00:55:56.098719 systemd[1]: Starting dracut-cmdline.service... Sep 13 00:55:56.127926 dracut-cmdline[203]: dracut-dracut-053 Sep 13 00:55:56.132452 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:55:56.142743 kernel: Bridge firewalling registered Sep 13 00:55:56.134944 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 13 00:55:56.165126 kernel: SCSI subsystem initialized Sep 13 00:55:56.186764 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:55:56.186832 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:55:56.189681 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:55:56.194223 systemd-modules-load[186]: Inserted module 'dm_multipath' Sep 13 00:55:56.195165 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:55:56.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.207645 kernel: audit: type=1130 audit(1757724956.197:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:56.206648 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:55:56.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.217895 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:55:56.229445 kernel: audit: type=1130 audit(1757724956.218:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.229480 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:55:56.247135 kernel: iscsi: registered transport (tcp) Sep 13 00:55:56.272232 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:55:56.272317 kernel: QLogic iSCSI HBA Driver Sep 13 00:55:56.304485 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:55:56.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.306491 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:55:56.358135 kernel: raid6: avx512x4 gen() 15319 MB/s Sep 13 00:55:56.376132 kernel: raid6: avx512x4 xor() 7885 MB/s Sep 13 00:55:56.394126 kernel: raid6: avx512x2 gen() 15329 MB/s Sep 13 00:55:56.412130 kernel: raid6: avx512x2 xor() 24120 MB/s Sep 13 00:55:56.430123 kernel: raid6: avx512x1 gen() 15385 MB/s Sep 13 00:55:56.448130 kernel: raid6: avx512x1 xor() 21692 MB/s Sep 13 00:55:56.466124 kernel: raid6: avx2x4 gen() 15216 MB/s Sep 13 00:55:56.484131 kernel: raid6: avx2x4 xor() 7393 MB/s Sep 13 00:55:56.502123 kernel: raid6: avx2x2 gen() 15268 MB/s Sep 13 00:55:56.520128 kernel: raid6: avx2x2 xor() 18021 MB/s Sep 13 00:55:56.538129 kernel: raid6: avx2x1 gen() 11661 MB/s Sep 13 00:55:56.556131 kernel: raid6: avx2x1 xor() 15749 MB/s Sep 13 00:55:56.574124 kernel: raid6: sse2x4 gen() 9534 MB/s Sep 13 00:55:56.592136 kernel: raid6: sse2x4 xor() 6039 MB/s Sep 13 00:55:56.610124 kernel: raid6: sse2x2 gen() 10456 MB/s Sep 13 00:55:56.628129 kernel: raid6: sse2x2 xor() 6112 MB/s Sep 13 00:55:56.646123 kernel: raid6: sse2x1 gen() 9443 MB/s Sep 13 00:55:56.664420 kernel: raid6: sse2x1 xor() 4815 MB/s Sep 13 00:55:56.664465 kernel: raid6: using algorithm avx512x1 gen() 15385 MB/s Sep 13 00:55:56.664484 kernel: raid6: .... xor() 21692 MB/s, rmw enabled Sep 13 00:55:56.665533 kernel: raid6: using avx512x2 recovery algorithm Sep 13 00:55:56.680136 kernel: xor: automatically using best checksumming function avx Sep 13 00:55:56.784138 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:55:56.792987 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:55:56.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.792000 audit: BPF prog-id=7 op=LOAD Sep 13 00:55:56.792000 audit: BPF prog-id=8 op=LOAD Sep 13 00:55:56.794608 systemd[1]: Starting systemd-udevd.service... Sep 13 00:55:56.807747 systemd-udevd[386]: Using default interface naming scheme 'v252'. Sep 13 00:55:56.813013 systemd[1]: Started systemd-udevd.service. Sep 13 00:55:56.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:55:56.818082 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:55:56.836071 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Sep 13 00:55:56.867372 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:55:56.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.868841 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:55:56.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:56.912604 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:55:56.972129 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:55:56.995077 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 13 00:55:57.024083 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 13 00:55:57.024276 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:55:57.024298 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Sep 13 00:55:57.024436 kernel: AES CTR mode by8 optimization enabled Sep 13 00:55:57.024461 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 13 00:55:57.024620 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:94:c7:2a:53:05 Sep 13 00:55:57.024765 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Sep 13 00:55:57.036130 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 13 00:55:57.047135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:55:57.047215 kernel: GPT:9289727 != 16777215 Sep 13 00:55:57.047236 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:55:57.047253 kernel: GPT:9289727 != 16777215 Sep 13 00:55:57.047475 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:55:57.049903 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:55:57.052547 (udev-worker)[430]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:55:57.118128 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (434) Sep 13 00:55:57.169783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:55:57.175168 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:55:57.178574 systemd[1]: Starting disk-uuid.service... Sep 13 00:55:57.186577 disk-uuid[587]: Primary Header is updated. Sep 13 00:55:57.186577 disk-uuid[587]: Secondary Entries is updated. Sep 13 00:55:57.186577 disk-uuid[587]: Secondary Header is updated. Sep 13 00:55:57.215862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:55:57.221707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:55:57.228826 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:55:58.203912 disk-uuid[588]: The operation has completed successfully. Sep 13 00:55:58.204722 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 13 00:55:58.315639 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:55:58.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:55:58.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.315746 systemd[1]: Finished disk-uuid.service. Sep 13 00:55:58.317283 systemd[1]: Starting verity-setup.service... Sep 13 00:55:58.335272 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:55:58.433497 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:55:58.434838 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:55:58.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.436605 systemd[1]: Finished verity-setup.service. Sep 13 00:55:58.530128 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 00:55:58.530835 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:55:58.531621 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:55:58.532370 systemd[1]: Starting ignition-setup.service... Sep 13 00:55:58.535879 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:55:58.558068 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:55:58.558143 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:55:58.558156 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:55:58.569128 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:55:58.581558 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:55:58.592304 systemd[1]: Finished ignition-setup.service. Sep 13 00:55:58.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.594570 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:55:58.626382 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:55:58.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.627000 audit: BPF prog-id=9 op=LOAD Sep 13 00:55:58.629612 systemd[1]: Starting systemd-networkd.service... Sep 13 00:55:58.655515 systemd-networkd[1106]: lo: Link UP Sep 13 00:55:58.656616 systemd-networkd[1106]: lo: Gained carrier Sep 13 00:55:58.657703 systemd-networkd[1106]: Enumeration completed Sep 13 00:55:58.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.657824 systemd[1]: Started systemd-networkd.service. Sep 13 00:55:58.658653 systemd[1]: Reached target network.target. Sep 13 00:55:58.660976 systemd[1]: Starting iscsiuio.service... Sep 13 00:55:58.661119 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:55:58.665429 systemd-networkd[1106]: eth0: Link UP Sep 13 00:55:58.665438 systemd-networkd[1106]: eth0: Gained carrier Sep 13 00:55:58.673593 systemd[1]: Started iscsiuio.service. 
Sep 13 00:55:58.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.675129 systemd[1]: Starting iscsid.service... Sep 13 00:55:58.678513 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:55:58.678513 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:55:58.678513 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:55:58.678513 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:55:58.678513 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:55:58.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.688435 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:55:58.680013 systemd[1]: Started iscsid.service. Sep 13 00:55:58.681226 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.27.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:55:58.684327 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:55:58.696261 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:55:58.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.697018 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:55:58.698034 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:55:58.698696 systemd[1]: Reached target remote-fs.target. Sep 13 00:55:58.700138 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:55:58.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.709181 systemd[1]: Finished dracut-pre-mount.service. Sep 13 00:55:58.879730 ignition[1062]: Ignition 2.14.0 Sep 13 00:55:58.880683 ignition[1062]: Stage: fetch-offline Sep 13 00:55:58.880819 ignition[1062]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:55:58.880852 ignition[1062]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:55:58.898521 ignition[1062]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:55:58.899234 ignition[1062]: Ignition finished successfully Sep 13 00:55:58.901071 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:55:58.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.903145 systemd[1]: Starting ignition-fetch.service... 
Sep 13 00:55:58.912439 ignition[1130]: Ignition 2.14.0 Sep 13 00:55:58.912452 ignition[1130]: Stage: fetch Sep 13 00:55:58.912656 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:55:58.912690 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:55:58.920136 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:55:58.921187 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:55:58.937809 ignition[1130]: INFO : PUT result: OK Sep 13 00:55:58.940665 ignition[1130]: DEBUG : parsed url from cmdline: "" Sep 13 00:55:58.940665 ignition[1130]: INFO : no config URL provided Sep 13 00:55:58.940665 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:55:58.940665 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Sep 13 00:55:58.943089 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:55:58.943089 ignition[1130]: INFO : PUT result: OK Sep 13 00:55:58.943089 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 13 00:55:58.943089 ignition[1130]: INFO : GET result: OK Sep 13 00:55:58.943089 ignition[1130]: DEBUG : parsing config with SHA512: d0aa609d2f8e84f9b94fd0c94c862bda1bbdf3bfd774741842fec325ff3f55653a456d011fd8b449673e789ab18547faa279c8c0e723b1ceb55c142717af2b07 Sep 13 00:55:58.947324 unknown[1130]: fetched base config from "system" Sep 13 00:55:58.947335 unknown[1130]: fetched base config from "system" Sep 13 00:55:58.947792 ignition[1130]: fetch: fetch complete Sep 13 00:55:58.947341 unknown[1130]: fetched user config from "aws" Sep 13 00:55:58.947797 ignition[1130]: fetch: fetch passed Sep 13 00:55:58.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.949194 systemd[1]: Finished ignition-fetch.service. Sep 13 00:55:58.947845 ignition[1130]: Ignition finished successfully Sep 13 00:55:58.950863 systemd[1]: Starting ignition-kargs.service... Sep 13 00:55:58.961162 ignition[1136]: Ignition 2.14.0 Sep 13 00:55:58.961172 ignition[1136]: Stage: kargs Sep 13 00:55:58.961322 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:55:58.961344 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:55:58.969776 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:55:58.970553 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:55:58.971122 ignition[1136]: INFO : PUT result: OK Sep 13 00:55:58.974062 ignition[1136]: kargs: kargs passed Sep 13 00:55:58.974141 ignition[1136]: Ignition finished successfully Sep 13 00:55:58.975672 systemd[1]: Finished ignition-kargs.service. Sep 13 00:55:58.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.977119 systemd[1]: Starting ignition-disks.service... 
Sep 13 00:55:58.985909 ignition[1142]: Ignition 2.14.0 Sep 13 00:55:58.985917 ignition[1142]: Stage: disks Sep 13 00:55:58.986069 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:55:58.986095 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:55:58.991788 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:55:58.992463 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:55:58.993024 ignition[1142]: INFO : PUT result: OK Sep 13 00:55:58.995696 ignition[1142]: disks: disks passed Sep 13 00:55:58.995747 ignition[1142]: Ignition finished successfully Sep 13 00:55:58.996651 systemd[1]: Finished ignition-disks.service. Sep 13 00:55:58.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:58.997346 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:55:58.998125 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:55:58.999059 systemd[1]: Reached target local-fs.target. Sep 13 00:55:58.999968 systemd[1]: Reached target sysinit.target. Sep 13 00:55:59.000838 systemd[1]: Reached target basic.target. Sep 13 00:55:59.002719 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:55:59.025337 systemd-fsck[1150]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:55:59.028438 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:55:59.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:59.030170 systemd[1]: Mounting sysroot.mount... Sep 13 00:55:59.049918 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 00:55:59.051775 systemd[1]: Mounted sysroot.mount. Sep 13 00:55:59.053161 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:55:59.055863 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:55:59.057690 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 13 00:55:59.058946 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:55:59.058986 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:55:59.060560 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:55:59.063718 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:55:59.069480 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:55:59.083243 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:55:59.087751 initrd-setup-root[1187]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:55:59.093470 initrd-setup-root[1195]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:55:59.174478 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:55:59.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:59.176261 systemd[1]: Starting ignition-mount.service... 
Sep 13 00:55:59.178409 systemd[1]: Starting sysroot-boot.service... Sep 13 00:55:59.188140 bash[1212]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:55:59.200005 ignition[1213]: INFO : Ignition 2.14.0 Sep 13 00:55:59.200005 ignition[1213]: INFO : Stage: mount Sep 13 00:55:59.202384 ignition[1213]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:55:59.202384 ignition[1213]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:55:59.215868 ignition[1213]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:55:59.216784 ignition[1213]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:55:59.217564 ignition[1213]: INFO : PUT result: OK Sep 13 00:55:59.219503 ignition[1213]: INFO : mount: mount passed Sep 13 00:55:59.220449 ignition[1213]: INFO : Ignition finished successfully Sep 13 00:55:59.222798 systemd[1]: Finished ignition-mount.service. Sep 13 00:55:59.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:59.228567 systemd[1]: Finished sysroot-boot.service. Sep 13 00:55:59.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:55:59.468013 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:55:59.497134 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1222) Sep 13 00:55:59.501610 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:55:59.501696 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 13 00:55:59.501731 kernel: BTRFS info (device nvme0n1p6): has skinny extents Sep 13 00:55:59.512441 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 13 00:55:59.516205 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:55:59.518624 systemd[1]: Starting ignition-files.service... 
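The files stage that starts here consumes the user config fetched from EC2 earlier: the entries that follow show it writing /etc/eks/bootstrap.sh and the SSM agent configuration (pulling contents from the OEM partition), linking the kubernetes sysext image, and enabling units through presets. As a rough sketch only, an Ignition config that drives this kind of stage looks roughly like the following; the spec version, paths, and values are illustrative placeholders, not the config this instance actually received:

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/etc/flatcar/update.conf",
            "mode": 420,
            "contents": { "source": "data:,REBOOT_STRATEGY=off%0A" }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "nvidia.service", "enabled": true }
        ]
      }
    }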
Sep 13 00:55:59.541168 ignition[1242]: INFO : Ignition 2.14.0 Sep 13 00:55:59.541168 ignition[1242]: INFO : Stage: files Sep 13 00:55:59.543536 ignition[1242]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:55:59.543536 ignition[1242]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:55:59.555180 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:55:59.556214 ignition[1242]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:55:59.557139 ignition[1242]: INFO : PUT result: OK Sep 13 00:55:59.559588 ignition[1242]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:55:59.565952 ignition[1242]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:55:59.565952 ignition[1242]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:55:59.574201 ignition[1242]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:55:59.576229 ignition[1242]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:55:59.577955 ignition[1242]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:55:59.577418 unknown[1242]: wrote ssh authorized keys file for user: core Sep 13 00:55:59.581078 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 13 00:55:59.581078 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:55:59.589492 ignition[1242]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180495208" Sep 13 00:55:59.589492 ignition[1242]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180495208": device or resource busy Sep 13 00:55:59.589492 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4180495208", trying btrfs: device or resource busy Sep 13 00:55:59.589492 ignition[1242]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180495208" Sep 13 00:55:59.589492 ignition[1242]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4180495208" Sep 13 00:55:59.605675 ignition[1242]: INFO : op(3): [started] unmounting "/mnt/oem4180495208" Sep 13 00:55:59.605675 ignition[1242]: INFO : op(3): [finished] unmounting "/mnt/oem4180495208" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:55:59.605675 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 13 00:55:59.605675 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:55:59.595899 systemd[1]: mnt-oem4180495208.mount: Deactivated successfully. Sep 13 00:55:59.642642 ignition[1242]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1160799226" Sep 13 00:55:59.642642 ignition[1242]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1160799226": device or resource busy Sep 13 00:55:59.642642 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1160799226", trying btrfs: device or resource busy Sep 13 00:55:59.642642 ignition[1242]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1160799226" Sep 13 00:55:59.642642 ignition[1242]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1160799226" Sep 13 00:55:59.642642 ignition[1242]: INFO : op(6): [started] unmounting "/mnt/oem1160799226" Sep 13 00:55:59.642642 ignition[1242]: INFO : op(6): [finished] unmounting "/mnt/oem1160799226" Sep 13 00:55:59.642642 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Sep 13 00:55:59.642642 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 13 00:55:59.642642 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:55:59.642642 ignition[1242]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1426886354" Sep 13 00:55:59.642642 ignition[1242]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1426886354": device or resource busy Sep 13 00:55:59.642642 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1426886354", trying btrfs: device or resource busy Sep 13 00:55:59.642642 ignition[1242]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1426886354" Sep 13 00:55:59.642642 ignition[1242]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1426886354" Sep 13 00:55:59.642642 ignition[1242]: INFO : op(9): [started] unmounting "/mnt/oem1426886354" Sep 13 00:55:59.642642 ignition[1242]: INFO : op(9): [finished] unmounting "/mnt/oem1426886354" Sep 13 00:55:59.642642 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Sep 13 00:55:59.642642 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:55:59.642642 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Sep 13 00:55:59.672327 ignition[1242]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3962607479" Sep 13 00:55:59.672327 ignition[1242]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3962607479": device or resource busy Sep 13 00:55:59.672327 
ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3962607479", trying btrfs: device or resource busy Sep 13 00:55:59.672327 ignition[1242]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3962607479" Sep 13 00:55:59.672327 ignition[1242]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3962607479" Sep 13 00:55:59.672327 ignition[1242]: INFO : op(c): [started] unmounting "/mnt/oem3962607479" Sep 13 00:55:59.672327 ignition[1242]: INFO : op(c): [finished] unmounting "/mnt/oem3962607479" Sep 13 00:55:59.672327 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Sep 13 00:55:59.672327 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:55:59.672327 ignition[1242]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:55:59.760325 systemd-networkd[1106]: eth0: Gained IPv6LL Sep 13 00:56:00.105523 ignition[1242]: INFO : GET result: OK Sep 13 00:56:00.471932 systemd[1]: mnt-oem1160799226.mount: Deactivated successfully. Sep 13 00:56:00.596320 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:56:00.596320 ignition[1242]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:56:00.596320 ignition[1242]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:56:00.596320 ignition[1242]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(e): [started] processing unit "nvidia.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(e): [finished] processing unit "nvidia.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(10): [started] setting preset to enabled for "amazon-ssm-agent.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(10): [finished] setting preset to enabled for "amazon-ssm-agent.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(11): [started] setting preset to enabled for "nvidia.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: op(11): [finished] setting preset to enabled for "nvidia.service" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Sep 13 00:56:00.611266 ignition[1242]: INFO : files: files passed Sep 13 00:56:00.611266 ignition[1242]: INFO : Ignition finished successfully Sep 13 00:56:00.706688 kernel: kauditd_printk_skb: 26 callbacks suppressed Sep 13 00:56:00.706731 kernel: audit: type=1130 audit(1757724960.614:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.706852 kernel: audit: type=1130 audit(1757724960.657:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.706877 kernel: audit: type=1130 audit(1757724960.669:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.706897 kernel: audit: type=1131 audit(1757724960.670:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.613892 systemd[1]: Finished ignition-files.service. Sep 13 00:56:00.627848 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:56:00.714321 initrd-setup-root-after-ignition[1265]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:56:00.648358 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 00:56:00.650069 systemd[1]: Starting ignition-quench.service... Sep 13 00:56:00.656517 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:56:00.659066 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:56:00.770469 kernel: audit: type=1130 audit(1757724960.725:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.770508 kernel: audit: type=1131 audit(1757724960.725:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:00.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.659468 systemd[1]: Finished ignition-quench.service. Sep 13 00:56:00.671507 systemd[1]: Reached target ignition-complete.target. Sep 13 00:56:00.689126 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:56:00.725054 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:56:00.725194 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:56:00.727015 systemd[1]: Reached target initrd-fs.target. Sep 13 00:56:00.772363 systemd[1]: Reached target initrd.target. Sep 13 00:56:00.778335 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:56:00.784632 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:56:00.848545 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:56:00.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.875189 kernel: audit: type=1130 audit(1757724960.850:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.859328 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:56:00.934381 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:56:00.935348 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:56:00.938617 systemd[1]: Stopped target timers.target. Sep 13 00:56:00.946877 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:56:00.976794 kernel: audit: type=1131 audit(1757724960.951:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:00.950957 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:56:00.953088 systemd[1]: Stopped target initrd.target. Sep 13 00:56:00.978530 systemd[1]: Stopped target basic.target. Sep 13 00:56:00.986524 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:56:00.988608 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:56:00.993339 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:56:01.001480 systemd[1]: Stopped target remote-fs.target. Sep 13 00:56:01.003681 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:56:01.006991 systemd[1]: Stopped target sysinit.target. Sep 13 00:56:01.008496 systemd[1]: Stopped target local-fs.target. Sep 13 00:56:01.014958 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:56:01.017887 systemd[1]: Stopped target swap.target. Sep 13 00:56:01.045041 kernel: audit: type=1131 audit(1757724961.029:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:56:01.025863 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:56:01.026120 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:56:01.072536 kernel: audit: type=1131 audit(1757724961.051:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.030935 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:56:01.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.046492 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:56:01.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.046717 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:56:01.052906 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:56:01.053179 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:56:01.075598 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:56:01.075838 systemd[1]: Stopped ignition-files.service. Sep 13 00:56:01.090329 systemd[1]: Stopping ignition-mount.service... Sep 13 00:56:01.112136 iscsid[1111]: iscsid shutting down. Sep 13 00:56:01.110779 systemd[1]: Stopping iscsid.service... Sep 13 00:56:01.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.119403 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:56:01.119644 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:56:01.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.142994 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:56:01.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.149380 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 13 00:56:01.220653 ignition[1280]: INFO : Ignition 2.14.0 Sep 13 00:56:01.220653 ignition[1280]: INFO : Stage: umount Sep 13 00:56:01.220653 ignition[1280]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:56:01.220653 ignition[1280]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Sep 13 00:56:01.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.149684 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:56:01.162793 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:56:01.163022 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:56:01.185739 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 00:56:01.185895 systemd[1]: Stopped iscsid.service. Sep 13 00:56:01.197846 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:56:01.292195 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 13 00:56:01.292195 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 13 00:56:01.292195 ignition[1280]: INFO : PUT result: OK Sep 13 00:56:01.199174 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:56:01.309595 ignition[1280]: INFO : umount: umount passed Sep 13 00:56:01.309595 ignition[1280]: INFO : Ignition finished successfully Sep 13 00:56:01.233704 systemd[1]: Stopping iscsiuio.service... Sep 13 00:56:01.247552 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:56:01.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.253689 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:56:01.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.253913 systemd[1]: Stopped iscsiuio.service. Sep 13 00:56:01.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.311865 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:56:01.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.312001 systemd[1]: Stopped ignition-mount.service. Sep 13 00:56:01.320636 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:56:01.321144 systemd[1]: Stopped ignition-disks.service. Sep 13 00:56:01.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.323880 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:56:01.323956 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:56:01.328227 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:56:01.328302 systemd[1]: Stopped ignition-fetch.service. 
Sep 13 00:56:01.334274 systemd[1]: Stopped target network.target. Sep 13 00:56:01.336938 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:56:01.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.337032 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:56:01.344250 systemd[1]: Stopped target paths.target. Sep 13 00:56:01.344976 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:56:01.351908 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:56:01.354241 systemd[1]: Stopped target slices.target. Sep 13 00:56:01.357674 systemd[1]: Stopped target sockets.target. Sep 13 00:56:01.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.359754 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:56:01.359830 systemd[1]: Closed iscsid.socket. Sep 13 00:56:01.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.391000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:56:01.364760 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:56:01.364825 systemd[1]: Closed iscsiuio.socket. Sep 13 00:56:01.365718 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:56:01.365788 systemd[1]: Stopped ignition-setup.service. Sep 13 00:56:01.367351 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:56:01.374798 systemd-networkd[1106]: eth0: DHCPv6 lease lost Sep 13 00:56:01.410000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:56:01.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.375986 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:56:01.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.381079 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:56:01.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.381264 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:56:01.389236 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:56:01.389376 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:56:01.392006 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:56:01.392057 systemd[1]: Closed systemd-networkd.socket. Sep 13 00:56:01.396955 systemd[1]: Stopping network-cleanup.service... Sep 13 00:56:01.410808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:56:01.410917 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:56:01.412165 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:56:01.412236 systemd[1]: Stopped systemd-sysctl.service. 
Sep 13 00:56:01.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.414991 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:56:01.415065 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:56:01.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.428761 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:56:01.439278 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:56:01.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.440295 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:56:01.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.440488 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:56:01.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.444854 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:56:01.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.444986 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:56:01.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.457054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:56:01.457167 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:56:01.461956 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:56:01.462026 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:56:01.463018 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:56:01.463096 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:56:01.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:01.475425 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:56:01.475499 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:56:01.482136 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 13 00:56:01.482226 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 00:56:01.484659 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:56:01.484795 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:56:01.492281 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:56:01.493951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:56:01.494041 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:56:01.499672 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:56:01.499970 systemd[1]: Stopped network-cleanup.service. Sep 13 00:56:01.511067 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:56:01.511215 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:56:01.515893 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:56:01.520222 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:56:01.556825 systemd[1]: Switching root. Sep 13 00:56:01.587638 systemd-journald[185]: Journal stopped Sep 13 00:56:06.612613 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Sep 13 00:56:06.612700 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:56:06.612724 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:56:06.612747 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:56:06.612767 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:56:06.612786 kernel: SELinux: policy capability open_perms=1 Sep 13 00:56:06.612810 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:56:06.612828 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:56:06.612844 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:56:06.612861 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:56:06.612876 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:56:06.612892 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:56:06.612915 systemd[1]: Successfully loaded SELinux policy in 146.914ms. Sep 13 00:56:06.612944 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36ms. Sep 13 00:56:06.612968 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:56:06.612988 systemd[1]: Detected virtualization amazon. Sep 13 00:56:06.613009 systemd[1]: Detected architecture x86-64. Sep 13 00:56:06.613034 systemd[1]: Detected first boot. Sep 13 00:56:06.613054 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:56:06.613075 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 00:56:06.613094 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:56:06.613164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:56:06.613196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 13 00:56:06.613220 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:56:06.613241 kernel: kauditd_printk_skb: 48 callbacks suppressed Sep 13 00:56:06.613260 kernel: audit: type=1334 audit(1757724966.363:88): prog-id=12 op=LOAD Sep 13 00:56:06.613280 kernel: audit: type=1334 audit(1757724966.363:89): prog-id=3 op=UNLOAD Sep 13 00:56:06.613300 kernel: audit: type=1334 audit(1757724966.365:90): prog-id=13 op=LOAD Sep 13 00:56:06.613321 kernel: audit: type=1334 audit(1757724966.367:91): prog-id=14 op=LOAD Sep 13 00:56:06.613340 kernel: audit: type=1334 audit(1757724966.367:92): prog-id=4 op=UNLOAD Sep 13 00:56:06.613360 kernel: audit: type=1334 audit(1757724966.367:93): prog-id=5 op=UNLOAD Sep 13 00:56:06.613378 kernel: audit: type=1334 audit(1757724966.369:94): prog-id=15 op=LOAD Sep 13 00:56:06.613398 kernel: audit: type=1334 audit(1757724966.369:95): prog-id=12 op=UNLOAD Sep 13 00:56:06.613419 kernel: audit: type=1334 audit(1757724966.377:96): prog-id=16 op=LOAD Sep 13 00:56:06.613437 kernel: audit: type=1334 audit(1757724966.379:97): prog-id=17 op=LOAD Sep 13 00:56:06.613457 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:56:06.613478 systemd[1]: Stopped initrd-switch-root.service. Sep 13 00:56:06.613502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:56:06.613523 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 00:56:06.613543 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 00:56:06.613565 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 00:56:06.613586 systemd[1]: Created slice system-getty.slice. Sep 13 00:56:06.613607 systemd[1]: Created slice system-modprobe.slice. Sep 13 00:56:06.613628 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 00:56:06.613653 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 00:56:06.613673 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 00:56:06.613694 systemd[1]: Created slice user.slice. Sep 13 00:56:06.613714 systemd[1]: Started systemd-ask-password-console.path. Sep 13 00:56:06.613735 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 00:56:06.613755 systemd[1]: Set up automount boot.automount. Sep 13 00:56:06.613776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 00:56:06.613796 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 00:56:06.613817 systemd[1]: Stopped target initrd-fs.target. Sep 13 00:56:06.613841 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 00:56:06.613863 systemd[1]: Reached target integritysetup.target. Sep 13 00:56:06.613884 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:56:06.613905 systemd[1]: Reached target remote-fs.target. Sep 13 00:56:06.613926 systemd[1]: Reached target slices.target. Sep 13 00:56:06.613946 systemd[1]: Reached target swap.target. Sep 13 00:56:06.613968 systemd[1]: Reached target torcx.target. Sep 13 00:56:06.613989 systemd[1]: Reached target veritysetup.target. Sep 13 00:56:06.614009 systemd[1]: Listening on systemd-coredump.socket. Sep 13 00:56:06.614032 systemd[1]: Listening on systemd-initctl.socket. Sep 13 00:56:06.614052 systemd[1]: Listening on systemd-networkd.socket. Sep 13 00:56:06.614070 systemd[1]: Listening on systemd-udevd-control.socket. 
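The locksmithd.service and docker.socket messages above are deprecation notices rather than errors: systemd 252 still honors CPUShares=, MemoryLimit=, and /var/run/ paths, but asks for CPUWeight=, MemoryMax=, and /run/ instead. In unit-file syntax the suggested replacements would look like this (the drop-in paths and values are illustrative placeholders; the notices themselves only disappear once the shipped unit files stop using the old keys):

    # e.g. /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

    # e.g. /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    CPUWeight=100
    MemoryMax=512M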
Sep 13 00:56:06.614090 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 00:56:06.614122 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 00:56:06.614140 systemd[1]: Mounting dev-hugepages.mount... Sep 13 00:56:06.614159 systemd[1]: Mounting dev-mqueue.mount... Sep 13 00:56:06.614192 systemd[1]: Mounting media.mount... Sep 13 00:56:06.614211 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:06.614229 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 00:56:06.614250 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 00:56:06.614268 systemd[1]: Mounting tmp.mount... Sep 13 00:56:06.614286 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 00:56:06.614306 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:06.614324 systemd[1]: Starting kmod-static-nodes.service... Sep 13 00:56:06.614343 systemd[1]: Starting modprobe@configfs.service... Sep 13 00:56:06.614361 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:06.614380 systemd[1]: Starting modprobe@drm.service... Sep 13 00:56:06.614411 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:06.614432 systemd[1]: Starting modprobe@fuse.service... Sep 13 00:56:06.614452 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:06.614472 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:56:06.614490 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:56:06.614509 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 00:56:06.614530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:56:06.614550 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:56:06.614568 systemd[1]: Stopped systemd-journald.service. Sep 13 00:56:06.614589 systemd[1]: Starting systemd-journald.service... Sep 13 00:56:06.614608 systemd[1]: Starting systemd-modules-load.service... Sep 13 00:56:06.614626 systemd[1]: Starting systemd-network-generator.service... Sep 13 00:56:06.614644 systemd[1]: Starting systemd-remount-fs.service... Sep 13 00:56:06.614662 kernel: loop: module loaded Sep 13 00:56:06.614680 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:56:06.614699 kernel: fuse: init (API version 7.34) Sep 13 00:56:06.614722 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:56:06.614740 systemd[1]: Stopped verity-setup.service. Sep 13 00:56:06.614759 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:06.614777 systemd[1]: Mounted dev-hugepages.mount. Sep 13 00:56:06.614795 systemd[1]: Mounted dev-mqueue.mount. Sep 13 00:56:06.614814 systemd[1]: Mounted media.mount. Sep 13 00:56:06.614838 systemd-journald[1406]: Journal started Sep 13 00:56:06.614908 systemd-journald[1406]: Runtime Journal (/run/log/journal/ec2b3fbc81447d96ae565ada1dd7da32) is 4.8M, max 38.3M, 33.5M free. 
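journald's volatile journal under /run/log/journal is sized from the backing filesystem unless limits are set explicitly; the "max 38.3M" figure above is that computed cap. If one wanted to pin the limits instead, the relevant knobs live in journald.conf; a small sketch with arbitrary example values:

    # /etc/systemd/journald.conf (illustrative values)
    [Journal]
    RuntimeMaxUse=40M
    RuntimeKeepFree=20M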
Sep 13 00:56:02.206000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:56:02.452000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:56:02.452000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 00:56:02.452000 audit: BPF prog-id=10 op=LOAD Sep 13 00:56:02.452000 audit: BPF prog-id=10 op=UNLOAD Sep 13 00:56:02.452000 audit: BPF prog-id=11 op=LOAD Sep 13 00:56:02.452000 audit: BPF prog-id=11 op=UNLOAD Sep 13 00:56:02.799000 audit[1315]: AVC avc: denied { associate } for pid=1315 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 00:56:02.799000 audit[1315]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e4 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1298 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:02.799000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:56:02.803000 audit[1315]: AVC avc: denied { associate } for pid=1315 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 00:56:02.803000 audit[1315]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179c9 a2=1ed a3=0 items=2 ppid=1298 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:02.803000 audit: CWD cwd="/" Sep 13 00:56:02.803000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:02.803000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:02.803000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 00:56:06.363000 audit: BPF prog-id=12 op=LOAD Sep 13 00:56:06.363000 audit: BPF prog-id=3 op=UNLOAD Sep 13 00:56:06.365000 audit: BPF prog-id=13 op=LOAD Sep 13 00:56:06.367000 audit: BPF prog-id=14 op=LOAD Sep 13 00:56:06.367000 audit: BPF prog-id=4 op=UNLOAD Sep 13 00:56:06.367000 audit: BPF prog-id=5 op=UNLOAD Sep 13 00:56:06.369000 audit: BPF prog-id=15 op=LOAD Sep 13 00:56:06.369000 audit: BPF prog-id=12 op=UNLOAD Sep 13 
00:56:06.377000 audit: BPF prog-id=16 op=LOAD Sep 13 00:56:06.379000 audit: BPF prog-id=17 op=LOAD Sep 13 00:56:06.379000 audit: BPF prog-id=13 op=UNLOAD Sep 13 00:56:06.379000 audit: BPF prog-id=14 op=UNLOAD Sep 13 00:56:06.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.618739 systemd[1]: Started systemd-journald.service. Sep 13 00:56:06.387000 audit: BPF prog-id=15 op=UNLOAD Sep 13 00:56:06.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.547000 audit: BPF prog-id=18 op=LOAD Sep 13 00:56:06.548000 audit: BPF prog-id=19 op=LOAD Sep 13 00:56:06.548000 audit: BPF prog-id=20 op=LOAD Sep 13 00:56:06.548000 audit: BPF prog-id=16 op=UNLOAD Sep 13 00:56:06.548000 audit: BPF prog-id=17 op=UNLOAD Sep 13 00:56:06.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.609000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 00:56:06.609000 audit[1406]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdb23bdd40 a2=4000 a3=7ffdb23bdddc items=0 ppid=1 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:06.609000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 00:56:06.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:02.785764 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:56:06.362746 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:56:02.788148 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:56:06.362759 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device. Sep 13 00:56:02.788183 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:56:06.381492 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:56:02.788234 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 00:56:06.619821 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 00:56:02.788251 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 00:56:06.622084 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 00:56:02.788303 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 00:56:06.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.623283 systemd[1]: Mounted tmp.mount. Sep 13 00:56:02.788327 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 00:56:06.624541 systemd[1]: Finished kmod-static-nodes.service. Sep 13 00:56:02.788628 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 00:56:06.625869 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:56:02.788680 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 00:56:06.626056 systemd[1]: Finished modprobe@configfs.service. 
Sep 13 00:56:02.788699 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 00:56:02.795946 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 00:56:02.796008 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 00:56:02.796042 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 00:56:02.796066 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 00:56:02.796096 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 00:56:02.796170 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 00:56:05.832668 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:05Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:56:05.832929 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:05Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:56:05.833032 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:05Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:56:05.833239 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:05Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 00:56:05.833290 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:05Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 00:56:05.833349 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2025-09-13T00:56:05Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" 
TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 00:56:06.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.629935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:06.630175 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:06.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.631413 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:56:06.631582 systemd[1]: Finished modprobe@drm.service. Sep 13 00:56:06.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.633759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:06.633926 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:06.635224 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:56:06.635406 systemd[1]: Finished modprobe@fuse.service. Sep 13 00:56:06.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:06.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.638684 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:06.638849 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:06.639987 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:56:06.641201 systemd[1]: Finished systemd-network-generator.service. Sep 13 00:56:06.642464 systemd[1]: Finished systemd-remount-fs.service. Sep 13 00:56:06.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.644423 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 00:56:06.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.646458 systemd[1]: Reached target network-pre.target. Sep 13 00:56:06.649309 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 00:56:06.651908 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 00:56:06.657377 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:56:06.668860 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 00:56:06.671313 systemd[1]: Starting systemd-journal-flush.service... Sep 13 00:56:06.672643 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:06.674833 systemd[1]: Starting systemd-random-seed.service... Sep 13 00:56:06.675884 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:06.678530 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:56:06.683748 systemd[1]: Starting systemd-sysusers.service... Sep 13 00:56:06.686978 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 00:56:06.687810 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 00:56:06.695517 systemd-journald[1406]: Time spent on flushing to /var/log/journal/ec2b3fbc81447d96ae565ada1dd7da32 is 42.764ms for 1200 entries. Sep 13 00:56:06.695517 systemd-journald[1406]: System Journal (/var/log/journal/ec2b3fbc81447d96ae565ada1dd7da32) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:56:06.757612 systemd-journald[1406]: Received client request to flush runtime journal. Sep 13 00:56:06.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:06.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.705559 systemd[1]: Finished systemd-random-seed.service. Sep 13 00:56:06.706469 systemd[1]: Reached target first-boot-complete.target. Sep 13 00:56:06.735387 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:56:06.758800 systemd[1]: Finished systemd-journal-flush.service. Sep 13 00:56:06.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.765794 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 00:56:06.768123 systemd[1]: Starting systemd-udev-settle.service... Sep 13 00:56:06.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:06.774977 systemd[1]: Finished systemd-sysusers.service. Sep 13 00:56:06.784712 udevadm[1434]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 00:56:07.264993 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 00:56:07.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.264000 audit: BPF prog-id=21 op=LOAD Sep 13 00:56:07.264000 audit: BPF prog-id=22 op=LOAD Sep 13 00:56:07.265000 audit: BPF prog-id=7 op=UNLOAD Sep 13 00:56:07.265000 audit: BPF prog-id=8 op=UNLOAD Sep 13 00:56:07.266873 systemd[1]: Starting systemd-udevd.service... Sep 13 00:56:07.287093 systemd-udevd[1435]: Using default interface naming scheme 'v252'. Sep 13 00:56:07.334055 systemd[1]: Started systemd-udevd.service. Sep 13 00:56:07.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.334000 audit: BPF prog-id=23 op=LOAD Sep 13 00:56:07.339443 systemd[1]: Starting systemd-networkd.service... Sep 13 00:56:07.350000 audit: BPF prog-id=24 op=LOAD Sep 13 00:56:07.350000 audit: BPF prog-id=25 op=LOAD Sep 13 00:56:07.350000 audit: BPF prog-id=26 op=LOAD Sep 13 00:56:07.351965 systemd[1]: Starting systemd-userdbd.service... Sep 13 00:56:07.389602 systemd[1]: Started systemd-userdbd.service. Sep 13 00:56:07.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.395666 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Sep 13 00:56:07.413772 (udev-worker)[1448]: Network interface NamePolicy= disabled on kernel command line. 
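The "(udev-worker) ... Network interface NamePolicy= disabled on kernel command line" message above typically means net.ifnames=0 was passed on the kernel command line, so the predictable-interface-name policy is skipped and the kernel-assigned name (eth0, which systemd-networkd configures just below) is kept. As a sketch of where that policy normally comes from (the shipped file is not reproduced in this log), the systemd default .link file looks roughly like:

  # /usr/lib/systemd/network/99-default.link (illustrative)
  [Link]
  NamePolicy=keep kernel database onboard slot path
  MACAddressPolicy=persistent

With the policy disabled on the command line, no renaming is attempted.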
Sep 13 00:56:07.444152 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 13 00:56:07.454932 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:56:07.455030 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Sep 13 00:56:07.472125 kernel: ACPI: button: Sleep Button [SLPF] Sep 13 00:56:07.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.497592 systemd-networkd[1444]: lo: Link UP Sep 13 00:56:07.497605 systemd-networkd[1444]: lo: Gained carrier Sep 13 00:56:07.498258 systemd-networkd[1444]: Enumeration completed Sep 13 00:56:07.498381 systemd[1]: Started systemd-networkd.service. Sep 13 00:56:07.500792 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 00:56:07.502620 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:56:07.507871 systemd-networkd[1444]: eth0: Link UP Sep 13 00:56:07.508118 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:56:07.508344 systemd-networkd[1444]: eth0: Gained carrier Sep 13 00:56:07.520306 systemd-networkd[1444]: eth0: DHCPv4 address 172.31.27.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 13 00:56:07.515000 audit[1447]: AVC avc: denied { confidentiality } for pid=1447 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 00:56:07.515000 audit[1447]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557f01da9b00 a1=338ec a2=7f9faff4dbc5 a3=5 items=110 ppid=1435 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:07.546146 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Sep 13 00:56:07.515000 audit: CWD cwd="/" Sep 13 00:56:07.515000 audit: PATH item=0 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=1 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=2 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=3 name=(null) inode=15200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=4 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=5 name=(null) inode=15201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=6 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=7 name=(null) inode=15202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=8 name=(null) inode=15202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=9 name=(null) inode=15203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=10 name=(null) inode=15202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=11 name=(null) inode=15204 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=12 name=(null) inode=15202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=13 name=(null) inode=15205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=14 name=(null) inode=15202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=15 name=(null) inode=15206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=16 name=(null) inode=15202 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=17 name=(null) inode=15207 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=18 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=19 name=(null) inode=15208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=20 name=(null) inode=15208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=21 name=(null) inode=15209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=22 name=(null) inode=15208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=23 name=(null) inode=15210 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=24 name=(null) inode=15208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=25 name=(null) inode=15211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=26 name=(null) inode=15208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=27 name=(null) inode=15212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=28 name=(null) inode=15208 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=29 name=(null) inode=15213 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=30 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=31 name=(null) inode=15214 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=32 name=(null) inode=15214 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=33 name=(null) inode=15215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=34 name=(null) inode=15214 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=35 name=(null) inode=15216 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=36 name=(null) inode=15214 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=37 name=(null) inode=15217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=38 name=(null) inode=15214 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH 
item=39 name=(null) inode=15218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=40 name=(null) inode=15214 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=41 name=(null) inode=15219 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=42 name=(null) inode=15199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=43 name=(null) inode=15220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=44 name=(null) inode=15220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=45 name=(null) inode=15221 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=46 name=(null) inode=15220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=47 name=(null) inode=15222 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=48 name=(null) inode=15220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=49 name=(null) inode=15223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=50 name=(null) inode=15220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=51 name=(null) inode=15224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=52 name=(null) inode=15220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=53 name=(null) inode=15225 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=54 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=55 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=56 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=57 name=(null) inode=15227 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=58 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=59 name=(null) inode=15228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=60 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=61 name=(null) inode=15229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=62 name=(null) inode=15229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=63 name=(null) inode=15230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=64 name=(null) inode=15229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=65 name=(null) inode=15231 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=66 name=(null) inode=15229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=67 name=(null) inode=15232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=68 name=(null) inode=15229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=69 name=(null) inode=15233 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=70 name=(null) inode=15229 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=71 name=(null) inode=15234 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=72 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=73 name=(null) inode=15235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=74 name=(null) inode=15235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=75 name=(null) inode=15236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=76 name=(null) inode=15235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=77 name=(null) inode=15237 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=78 name=(null) inode=15235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=79 name=(null) inode=15238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=80 name=(null) inode=15235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=81 name=(null) inode=15239 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=82 name=(null) inode=15235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=83 name=(null) inode=15240 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=84 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=85 name=(null) inode=15241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=86 name=(null) inode=15241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=87 name=(null) inode=15242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: 
PATH item=88 name=(null) inode=15241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=89 name=(null) inode=15243 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=90 name=(null) inode=15241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=91 name=(null) inode=15244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=92 name=(null) inode=15241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=93 name=(null) inode=15245 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=94 name=(null) inode=15241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=95 name=(null) inode=15246 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=96 name=(null) inode=15226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=97 name=(null) inode=15247 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=98 name=(null) inode=15247 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=99 name=(null) inode=15248 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=100 name=(null) inode=15247 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=101 name=(null) inode=15249 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=102 name=(null) inode=15247 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=103 name=(null) inode=15250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=104 name=(null) inode=15247 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=105 name=(null) inode=15251 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=106 name=(null) inode=15247 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=107 name=(null) inode=15252 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PATH item=109 name=(null) inode=15253 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:56:07.515000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:56:07.564152 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 00:56:07.569215 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:56:07.633618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:56:07.650098 systemd[1]: Finished systemd-udev-settle.service. Sep 13 00:56:07.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.651990 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:56:07.689970 lvm[1549]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:56:07.720444 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:56:07.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.721080 systemd[1]: Reached target cryptsetup.target. Sep 13 00:56:07.722841 systemd[1]: Starting lvm2-activation.service... Sep 13 00:56:07.727787 lvm[1550]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:56:07.755538 systemd[1]: Finished lvm2-activation.service. Sep 13 00:56:07.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.756200 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:56:07.756692 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:56:07.756723 systemd[1]: Reached target local-fs.target. Sep 13 00:56:07.757217 systemd[1]: Reached target machines.target. Sep 13 00:56:07.758898 systemd[1]: Starting ldconfig.service... Sep 13 00:56:07.760651 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 13 00:56:07.760712 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:07.761946 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:56:07.763593 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:56:07.765046 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:56:07.766563 systemd[1]: Starting systemd-sysext.service... Sep 13 00:56:07.777257 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1552 (bootctl) Sep 13 00:56:07.778688 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:56:07.783988 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:56:07.785150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:56:07.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.791360 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:56:07.791548 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:56:07.807143 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 00:56:07.939406 systemd-fsck[1561]: fsck.fat 4.2 (2021-01-31) Sep 13 00:56:07.939406 systemd-fsck[1561]: /dev/nvme0n1p1: 790 files, 120761/258078 clusters Sep 13 00:56:07.942570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:56:07.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.944970 systemd[1]: Mounting boot.mount... Sep 13 00:56:07.958135 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:56:07.965708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:56:07.966354 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 00:56:07.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:07.969560 systemd[1]: Mounted boot.mount. Sep 13 00:56:07.978126 kernel: loop1: detected capacity change from 0 to 224512 Sep 13 00:56:07.999714 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:56:07.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.002550 (sd-sysext)[1570]: Using extensions 'kubernetes'. Sep 13 00:56:08.003957 (sd-sysext)[1570]: Merged extensions into '/usr'. Sep 13 00:56:08.023540 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:08.025707 systemd[1]: Mounting usr-share-oem.mount... Sep 13 00:56:08.026674 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.028887 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:08.033715 systemd[1]: Starting modprobe@efi_pstore.service... 
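Earlier in this block, (sd-sysext) reports "Using extensions 'kubernetes'" and "Merged extensions into '/usr'". As a general sketch of the systemd-sysext mechanism (the actual Flatcar extension image is not shown in this log), an extension image dropped under /etc/extensions/, /run/extensions/ or /var/lib/extensions/ must carry an extension-release file whose identification fields match the host before it is overlaid, along the lines of:

  # usr/lib/extension-release.d/extension-release.kubernetes, inside the image (illustrative)
  ID=flatcar
  SYSEXT_LEVEL=1.0

When the match succeeds, systemd-sysext merges the image's /usr (and optionally /opt) over the running system, which is what the "Merged extensions" message records.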
Sep 13 00:56:08.036481 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:08.037587 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.038312 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:08.038526 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:08.042844 systemd[1]: Mounted usr-share-oem.mount. Sep 13 00:56:08.046886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:08.047061 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:08.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.048501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:08.048688 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:08.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.049904 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:08.050071 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:08.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.051619 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:08.051783 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.053499 systemd[1]: Finished systemd-sysext.service. Sep 13 00:56:08.055646 systemd[1]: Starting ensure-sysext.service... Sep 13 00:56:08.059629 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 00:56:08.068012 systemd[1]: Reloading. Sep 13 00:56:08.091297 systemd-tmpfiles[1583]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 00:56:08.101916 systemd-tmpfiles[1583]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
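The systemd-tmpfiles "Duplicate line for path ..., ignoring" warnings above (and the one that follows) are benign: when more than one tmpfiles.d fragment declares the same path, the first declaration parsed wins and later ones are skipped with exactly this warning. An illustrative, not verbatim, pair of entries:

  # some earlier-parsed tmpfiles.d fragment
  d /run/lock 0755 root root -
  # a later fragment repeating a line for /run/lock is then reported as
  # "Duplicate line for path /run/lock, ignoring" and has no effect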
Sep 13 00:56:08.115367 systemd-tmpfiles[1583]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:56:08.129212 /usr/lib/systemd/system-generators/torcx-generator[1602]: time="2025-09-13T00:56:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:56:08.129242 /usr/lib/systemd/system-generators/torcx-generator[1602]: time="2025-09-13T00:56:08Z" level=info msg="torcx already run" Sep 13 00:56:08.288237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:56:08.288540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:56:08.327383 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:56:08.423000 audit: BPF prog-id=27 op=LOAD Sep 13 00:56:08.423000 audit: BPF prog-id=18 op=UNLOAD Sep 13 00:56:08.424000 audit: BPF prog-id=28 op=LOAD Sep 13 00:56:08.424000 audit: BPF prog-id=29 op=LOAD Sep 13 00:56:08.424000 audit: BPF prog-id=19 op=UNLOAD Sep 13 00:56:08.424000 audit: BPF prog-id=20 op=UNLOAD Sep 13 00:56:08.425000 audit: BPF prog-id=30 op=LOAD Sep 13 00:56:08.425000 audit: BPF prog-id=23 op=UNLOAD Sep 13 00:56:08.426000 audit: BPF prog-id=31 op=LOAD Sep 13 00:56:08.427000 audit: BPF prog-id=32 op=LOAD Sep 13 00:56:08.427000 audit: BPF prog-id=21 op=UNLOAD Sep 13 00:56:08.427000 audit: BPF prog-id=22 op=UNLOAD Sep 13 00:56:08.427000 audit: BPF prog-id=33 op=LOAD Sep 13 00:56:08.427000 audit: BPF prog-id=24 op=UNLOAD Sep 13 00:56:08.427000 audit: BPF prog-id=34 op=LOAD Sep 13 00:56:08.427000 audit: BPF prog-id=35 op=LOAD Sep 13 00:56:08.427000 audit: BPF prog-id=25 op=UNLOAD Sep 13 00:56:08.427000 audit: BPF prog-id=26 op=UNLOAD Sep 13 00:56:08.435552 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 00:56:08.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.441515 systemd[1]: Starting audit-rules.service... Sep 13 00:56:08.443867 systemd[1]: Starting clean-ca-certificates.service... Sep 13 00:56:08.446307 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 00:56:08.449000 audit: BPF prog-id=36 op=LOAD Sep 13 00:56:08.454000 audit: BPF prog-id=37 op=LOAD Sep 13 00:56:08.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.453319 systemd[1]: Starting systemd-resolved.service... Sep 13 00:56:08.456557 systemd[1]: Starting systemd-timesyncd.service... Sep 13 00:56:08.458899 systemd[1]: Starting systemd-update-utmp.service... Sep 13 00:56:08.460351 systemd[1]: Finished clean-ca-certificates.service. 
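The unit warnings above are actionable but harmless at boot: locksmithd.service still sets the cgroup-v1 directives CPUShares= and MemoryLimit=, and docker.socket listens on the legacy /var/run path. A hedged sketch of drop-ins that would address them (the values below are hypothetical, not taken from the shipped units):

  # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (illustrative)
  [Service]
  CPUWeight=100
  MemoryMax=128M

  # /etc/systemd/system/docker.socket.d/10-runtime-dir.conf (illustrative)
  [Socket]
  ListenStream=
  ListenStream=/run/docker.sock

ListenStream= is list-valued, so the empty assignment clears the inherited /var/run/docker.sock entry before the /run/docker.sock path is added.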
Sep 13 00:56:08.462465 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:08.471000 audit[1665]: SYSTEM_BOOT pid=1665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.477890 systemd[1]: Finished systemd-update-utmp.service. Sep 13 00:56:08.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.485752 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.488685 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:08.492301 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:08.497061 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:08.497870 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.498083 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:08.498316 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:08.501020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:08.501255 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:56:08.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.502579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:08.502762 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:08.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.503920 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:08.506747 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:08.506922 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:08.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:56:08.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.508256 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.510962 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.514753 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:56:08.517527 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:56:08.521001 systemd[1]: Starting modprobe@loop.service... Sep 13 00:56:08.521807 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.522024 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:08.522313 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:08.528893 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.532179 systemd[1]: Starting modprobe@drm.service... Sep 13 00:56:08.533036 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.535316 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:08.535554 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:56:08.536645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:56:08.536847 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:56:08.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.538012 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:56:08.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.539351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:56:08.539514 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 13 00:56:08.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.544224 systemd[1]: Finished ensure-sysext.service. Sep 13 00:56:08.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.551510 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:56:08.551701 systemd[1]: Finished modprobe@drm.service. Sep 13 00:56:08.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.556526 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:56:08.556724 systemd[1]: Finished modprobe@loop.service. Sep 13 00:56:08.557517 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:56:08.599377 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 00:56:08.616000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 00:56:08.616000 audit[1686]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea6191090 a2=420 a3=0 items=0 ppid=1659 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 00:56:08.616000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 00:56:08.618988 systemd[1]: Finished audit-rules.service. Sep 13 00:56:08.619999 augenrules[1686]: No rules Sep 13 00:56:08.625154 ldconfig[1551]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:56:08.631726 systemd[1]: Started systemd-timesyncd.service. Sep 13 00:56:08.632403 systemd[1]: Reached target time-set.target. Sep 13 00:56:08.635631 systemd[1]: Finished ldconfig.service. Sep 13 00:56:08.637973 systemd[1]: Starting systemd-update-done.service... Sep 13 00:56:08.648324 systemd[1]: Finished systemd-update-done.service. Sep 13 00:56:08.667171 systemd-resolved[1662]: Positive Trust Anchors: Sep 13 00:56:08.667452 systemd-resolved[1662]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:56:08.667528 systemd-resolved[1662]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 00:56:08.705407 systemd-resolved[1662]: Defaulting to hostname 'linux'. Sep 13 00:56:08.707184 systemd[1]: Started systemd-resolved.service. Sep 13 00:56:08.707696 systemd[1]: Reached target network.target. Sep 13 00:56:08.708091 systemd[1]: Reached target nss-lookup.target. Sep 13 00:56:08.708480 systemd[1]: Reached target sysinit.target. Sep 13 00:56:08.708936 systemd[1]: Started motdgen.path. Sep 13 00:56:08.709355 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 00:56:08.709880 systemd[1]: Started logrotate.timer. Sep 13 00:56:08.710501 systemd[1]: Started mdadm.timer. Sep 13 00:56:08.710860 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 00:56:08.711240 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:56:08.711281 systemd[1]: Reached target paths.target. Sep 13 00:56:08.711644 systemd[1]: Reached target timers.target. Sep 13 00:56:08.712318 systemd[1]: Listening on dbus.socket. Sep 13 00:56:08.713889 systemd[1]: Starting docker.socket... Sep 13 00:56:08.717835 systemd[1]: Listening on sshd.socket. Sep 13 00:56:08.718533 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:56:08.719071 systemd[1]: Listening on docker.socket. Sep 13 00:56:08.719558 systemd[1]: Reached target sockets.target. Sep 13 00:56:08.719919 systemd[1]: Reached target basic.target. Sep 13 00:56:08.720356 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.720399 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 00:56:08.721582 systemd[1]: Starting containerd.service... Sep 13 00:56:08.723558 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 00:56:08.726043 systemd[1]: Starting dbus.service... Sep 13 00:56:08.728514 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 00:56:08.731888 systemd[1]: Starting extend-filesystems.service... Sep 13 00:56:08.736545 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 00:56:08.738042 systemd[1]: Starting motdgen.service... Sep 13 00:56:08.740420 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 00:56:08.745810 systemd[1]: Starting sshd-keygen.service... Sep 13 00:56:08.753354 systemd[1]: Starting systemd-logind.service... Sep 13 00:56:08.753999 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
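The audit-rules entries a little above include a raw PROCTITLE record whose value is the hex-encoded, NUL-separated argv of the process that loaded the rules. A small standard-library sketch that decodes it; the hex string is copied from the log:

    # PROCTITLE from the audit record above: argv bytes, NUL-separated, hex-encoded.
    proctitle_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print([a.decode() for a in argv])   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']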
Sep 13 00:56:08.754120 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:56:08.754854 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:56:08.755981 systemd[1]: Starting update-engine.service... Sep 13 00:56:08.758008 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 00:56:08.769306 jq[1698]: false Sep 13 00:56:08.768432 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:56:08.768700 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 00:56:08.769165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:56:08.769361 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 00:56:08.789578 jq[1706]: true Sep 13 00:56:08.793214 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:08.793253 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:56:08.827009 jq[1714]: true Sep 13 00:56:08.843862 dbus-daemon[1697]: [system] SELinux support is enabled Sep 13 00:56:08.844476 systemd[1]: Started dbus.service. Sep 13 00:56:08.848309 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:56:08.848353 systemd[1]: Reached target system-config.target. Sep 13 00:56:08.848833 dbus-daemon[1697]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1444 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 00:56:08.849005 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:56:08.849041 systemd[1]: Reached target user-config.target. Sep 13 00:56:08.859243 dbus-daemon[1697]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:56:08.866513 systemd[1]: Starting systemd-hostnamed.service... Sep 13 00:56:08.877553 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:56:08.877784 systemd[1]: Finished motdgen.service. Sep 13 00:56:08.892201 extend-filesystems[1699]: Found loop1 Sep 13 00:56:08.894071 extend-filesystems[1699]: Found nvme0n1 Sep 13 00:56:08.897317 extend-filesystems[1699]: Found nvme0n1p1 Sep 13 00:56:08.900719 extend-filesystems[1699]: Found nvme0n1p2 Sep 13 00:56:08.903255 extend-filesystems[1699]: Found nvme0n1p3 Sep 13 00:56:08.903912 extend-filesystems[1699]: Found usr Sep 13 00:56:08.903912 extend-filesystems[1699]: Found nvme0n1p4 Sep 13 00:56:08.903912 extend-filesystems[1699]: Found nvme0n1p6 Sep 13 00:56:08.903912 extend-filesystems[1699]: Found nvme0n1p7 Sep 13 00:56:08.903912 extend-filesystems[1699]: Found nvme0n1p9 Sep 13 00:56:08.903912 extend-filesystems[1699]: Checking size of /dev/nvme0n1p9 Sep 13 00:56:08.913859 systemd-networkd[1444]: eth0: Gained IPv6LL Sep 13 00:56:08.914682 systemd-timesyncd[1664]: Network configuration changed, trying to establish connection. Sep 13 00:56:08.916416 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 00:56:08.917123 systemd[1]: Reached target network-online.target. 
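The negative trust anchors systemd-resolved lists a little further above are the reverse-DNS zones for private and special-use address space (RFC 1918 ranges, home.arpa, local, and so on), where DNSSEC validation would be pointless. A quick standard-library sketch showing that this host's own address, 172.31.27.34, which appears later in the log, falls under one of those zones:

    import ipaddress

    addr = ipaddress.ip_address("172.31.27.34")   # the instance's private IPv4 address
    print(addr.is_private)                        # True: inside 172.16.0.0/12
    print(addr.reverse_pointer)                   # 34.27.31.172.in-addr.arpa

    # '31.172.in-addr.arpa' is one of the negative trust anchors listed above,
    # so the PTR name for this address is covered by it.
    negative_anchors = {f"{n}.172.in-addr.arpa" for n in range(16, 32)}
    print(any(addr.reverse_pointer.endswith(zone) for zone in negative_anchors))   # True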
Sep 13 00:56:08.919269 systemd[1]: Started amazon-ssm-agent.service. Sep 13 00:56:08.929434 systemd[1]: Starting kubelet.service... Sep 13 00:56:08.931695 systemd[1]: Started nvidia.service. Sep 13 00:56:08.974168 bash[1742]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:56:08.973395 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 00:56:09.053626 update_engine[1705]: I0913 00:56:09.046293 1705 main.cc:92] Flatcar Update Engine starting Sep 13 00:56:09.067631 extend-filesystems[1699]: Resized partition /dev/nvme0n1p9 Sep 13 00:56:09.070092 systemd[1]: Started update-engine.service. Sep 13 00:56:09.073157 systemd[1]: Started locksmithd.service. Sep 13 00:56:09.074968 update_engine[1705]: I0913 00:56:09.074775 1705 update_check_scheduler.cc:74] Next update check in 6m57s Sep 13 00:56:09.081026 extend-filesystems[1758]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 00:56:09.132193 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 13 00:56:09.146328 amazon-ssm-agent[1744]: 2025/09/13 00:56:09 Failed to load instance info from vault. RegistrationKey does not exist. Sep 13 00:56:09.174375 amazon-ssm-agent[1744]: Initializing new seelog logger Sep 13 00:56:09.175128 amazon-ssm-agent[1744]: New Seelog Logger Creation Complete Sep 13 00:56:09.177268 amazon-ssm-agent[1744]: 2025/09/13 00:56:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:56:09.177434 amazon-ssm-agent[1744]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 13 00:56:09.178032 amazon-ssm-agent[1744]: 2025/09/13 00:56:09 processing appconfig overrides Sep 13 00:56:09.181513 systemd-logind[1703]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:56:09.181545 systemd-logind[1703]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:56:09.181570 systemd-logind[1703]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:56:09.181812 systemd-logind[1703]: New seat seat0. Sep 13 00:56:09.187608 systemd[1]: Started systemd-logind.service. Sep 13 00:56:09.228938 env[1711]: time="2025-09-13T00:56:09.228613131Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 00:56:09.242782 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 13 00:56:09.265549 extend-filesystems[1758]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 13 00:56:09.265549 extend-filesystems[1758]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:56:09.265549 extend-filesystems[1758]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 13 00:56:09.264213 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:56:09.273174 extend-filesystems[1699]: Resized filesystem in /dev/nvme0n1p9 Sep 13 00:56:09.264443 systemd[1]: Finished extend-filesystems.service. Sep 13 00:56:09.334459 env[1711]: time="2025-09-13T00:56:09.334342716Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:56:09.349479 env[1711]: time="2025-09-13T00:56:09.349417441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:09.353098 env[1711]: time="2025-09-13T00:56:09.353028494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:56:09.353299 env[1711]: time="2025-09-13T00:56:09.353274302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:09.353784 env[1711]: time="2025-09-13T00:56:09.353754888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:56:09.355451 env[1711]: time="2025-09-13T00:56:09.355422593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:09.355558 dbus-daemon[1697]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 00:56:09.355747 systemd[1]: Started systemd-hostnamed.service. Sep 13 00:56:09.355899 env[1711]: time="2025-09-13T00:56:09.355875015Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 00:56:09.355993 env[1711]: time="2025-09-13T00:56:09.355977474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:09.356310 env[1711]: time="2025-09-13T00:56:09.356290427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:09.359858 dbus-daemon[1697]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1732 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 00:56:09.360538 env[1711]: time="2025-09-13T00:56:09.360494128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:56:09.362522 env[1711]: time="2025-09-13T00:56:09.362471157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:56:09.364643 systemd[1]: Starting polkit.service... Sep 13 00:56:09.365085 env[1711]: time="2025-09-13T00:56:09.365058526Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:56:09.365344 env[1711]: time="2025-09-13T00:56:09.365295658Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 00:56:09.365677 env[1711]: time="2025-09-13T00:56:09.365653992Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:56:09.393087 env[1711]: time="2025-09-13T00:56:09.393044165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:56:09.393301 env[1711]: time="2025-09-13T00:56:09.393279233Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:56:09.393390 env[1711]: time="2025-09-13T00:56:09.393372648Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 13 00:56:09.393509 env[1711]: time="2025-09-13T00:56:09.393492477Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.393668 env[1711]: time="2025-09-13T00:56:09.393651406Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.393752 env[1711]: time="2025-09-13T00:56:09.393737467Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.393825 env[1711]: time="2025-09-13T00:56:09.393810855Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.393897 env[1711]: time="2025-09-13T00:56:09.393883588Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.393979 env[1711]: time="2025-09-13T00:56:09.393964969Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.394052 env[1711]: time="2025-09-13T00:56:09.394037693Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.394164 env[1711]: time="2025-09-13T00:56:09.394148306Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.394256 env[1711]: time="2025-09-13T00:56:09.394240605Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:56:09.394461 env[1711]: time="2025-09-13T00:56:09.394444288Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:56:09.394647 env[1711]: time="2025-09-13T00:56:09.394625953Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:56:09.395298 env[1711]: time="2025-09-13T00:56:09.395269279Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:56:09.395427 env[1711]: time="2025-09-13T00:56:09.395409106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.395529 env[1711]: time="2025-09-13T00:56:09.395496801Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:56:09.395666 env[1711]: time="2025-09-13T00:56:09.395649196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.395814 env[1711]: time="2025-09-13T00:56:09.395796663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.395891 env[1711]: time="2025-09-13T00:56:09.395877119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.395962 env[1711]: time="2025-09-13T00:56:09.395948206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396044 env[1711]: time="2025-09-13T00:56:09.396029437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396132 env[1711]: time="2025-09-13T00:56:09.396117162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 13 00:56:09.396212 env[1711]: time="2025-09-13T00:56:09.396195644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396282 env[1711]: time="2025-09-13T00:56:09.396268018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396376 env[1711]: time="2025-09-13T00:56:09.396360967Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:56:09.396630 env[1711]: time="2025-09-13T00:56:09.396603495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396729 env[1711]: time="2025-09-13T00:56:09.396712855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396804 env[1711]: time="2025-09-13T00:56:09.396790086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:56:09.396874 env[1711]: time="2025-09-13T00:56:09.396858835Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:56:09.396956 env[1711]: time="2025-09-13T00:56:09.396935858Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 00:56:09.397039 env[1711]: time="2025-09-13T00:56:09.397023799Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:56:09.397137 env[1711]: time="2025-09-13T00:56:09.397120629Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 00:56:09.397265 env[1711]: time="2025-09-13T00:56:09.397248263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:56:09.397684 env[1711]: time="2025-09-13T00:56:09.397605430Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:56:09.401542 env[1711]: time="2025-09-13T00:56:09.398432603Z" level=info msg="Connect containerd service" Sep 13 00:56:09.401542 env[1711]: time="2025-09-13T00:56:09.398488588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:56:09.402669 env[1711]: time="2025-09-13T00:56:09.402629461Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:56:09.407958 systemd[1]: Created slice system-sshd.slice. Sep 13 00:56:09.409946 polkitd[1800]: Started polkitd version 121 Sep 13 00:56:09.424195 env[1711]: time="2025-09-13T00:56:09.424086589Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:56:09.424334 env[1711]: time="2025-09-13T00:56:09.424245696Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:56:09.424420 systemd[1]: Started containerd.service. 
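The containerd CRI plugin above comes up without a pod network: it expects at least one CNI configuration under NetworkPluginConfDir:/etc/cni/net.d and binaries under NetworkPluginBinDir:/opt/cni/bin, and logs "no network config found" until something provides them (typically a CNI add-on installed after the node joins a cluster). Purely as an illustration of the file format it is waiting for, a sketch that writes a minimal bridge/host-local conflist; the network name and subnet are made up and this is not the configuration this node actually ends up with:

    import json, pathlib

    # Hypothetical minimal CNI config of the kind containerd's CRI plugin scans for
    # in /etc/cni/net.d; assumes the 'bridge', 'host-local' and 'portmap' plugins
    # exist in /opt/cni/bin.
    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-pod-network",            # illustrative name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",     # illustrative pod subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    pathlib.Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/etc/cni/net.d/10-example.conflist").write_text(json.dumps(conflist, indent=2))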
Sep 13 00:56:09.432771 env[1711]: time="2025-09-13T00:56:09.432703794Z" level=info msg="containerd successfully booted in 0.305384s" Sep 13 00:56:09.435207 env[1711]: time="2025-09-13T00:56:09.435044453Z" level=info msg="Start subscribing containerd event" Sep 13 00:56:09.435554 env[1711]: time="2025-09-13T00:56:09.435525292Z" level=info msg="Start recovering state" Sep 13 00:56:09.435846 env[1711]: time="2025-09-13T00:56:09.435825092Z" level=info msg="Start event monitor" Sep 13 00:56:09.435983 env[1711]: time="2025-09-13T00:56:09.435964946Z" level=info msg="Start snapshots syncer" Sep 13 00:56:09.436075 env[1711]: time="2025-09-13T00:56:09.435990032Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:56:09.436075 env[1711]: time="2025-09-13T00:56:09.436011532Z" level=info msg="Start streaming server" Sep 13 00:56:09.437708 polkitd[1800]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 00:56:09.440236 polkitd[1800]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 00:56:09.443822 polkitd[1800]: Finished loading, compiling and executing 2 rules Sep 13 00:56:09.448073 dbus-daemon[1697]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 00:56:09.448502 systemd[1]: Started polkit.service. Sep 13 00:56:09.449743 polkitd[1800]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 00:56:09.486970 systemd-hostnamed[1732]: Hostname set to (transient) Sep 13 00:56:09.487094 systemd-resolved[1662]: System hostname changed to 'ip-172-31-27-34'. Sep 13 00:56:09.488605 systemd[1]: nvidia.service: Deactivated successfully. Sep 13 00:56:09.718091 coreos-metadata[1696]: Sep 13 00:56:09.717 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:56:09.724212 coreos-metadata[1696]: Sep 13 00:56:09.724 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Sep 13 00:56:09.725033 coreos-metadata[1696]: Sep 13 00:56:09.725 INFO Fetch successful Sep 13 00:56:09.725138 coreos-metadata[1696]: Sep 13 00:56:09.725 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 00:56:09.725745 coreos-metadata[1696]: Sep 13 00:56:09.725 INFO Fetch successful Sep 13 00:56:09.739114 unknown[1696]: wrote ssh authorized keys file for user: core Sep 13 00:56:09.772641 update-ssh-keys[1882]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:56:09.773442 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
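coreos-metadata's "Putting .../latest/api/token" followed by the metadata fetches above is the standard IMDSv2 flow: obtain a short-lived session token with an HTTP PUT, then send it on every GET. A standard-library sketch of the same two steps; it only works from inside an EC2 instance, and the paths mirror the ones in the log:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a token request (IMDSv2); TTL is in seconds.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET metadata with the token attached, e.g. the public SSH key that
    # ends up in /home/core/.ssh/authorized_keys above.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/meta-data/public-keys/0/openssh-key",
        headers={"X-aws-ec2-metadata-token": token})
    print(urllib.request.urlopen(req, timeout=2).read().decode())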
Sep 13 00:56:09.805373 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Create new startup processor Sep 13 00:56:09.813424 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [LongRunningPluginsManager] registered plugins: {} Sep 13 00:56:09.813628 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing bookkeeping folders Sep 13 00:56:09.813725 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO removing the completed state files Sep 13 00:56:09.813815 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing bookkeeping folders for long running plugins Sep 13 00:56:09.813901 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Sep 13 00:56:09.813978 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing healthcheck folders for long running plugins Sep 13 00:56:09.814066 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing locations for inventory plugin Sep 13 00:56:09.814156 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing default location for custom inventory Sep 13 00:56:09.814253 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing default location for file inventory Sep 13 00:56:09.814339 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Initializing default location for role inventory Sep 13 00:56:09.814420 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Init the cloudwatchlogs publisher Sep 13 00:56:09.814490 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:softwareInventory Sep 13 00:56:09.814569 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:runPowerShellScript Sep 13 00:56:09.814639 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:updateSsmAgent Sep 13 00:56:09.814717 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:refreshAssociation Sep 13 00:56:09.814786 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:configurePackage Sep 13 00:56:09.814874 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:downloadContent Sep 13 00:56:09.814959 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:configureDocker Sep 13 00:56:09.815037 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:runDockerAction Sep 13 00:56:09.816044 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform independent plugin aws:runDocument Sep 13 00:56:09.816368 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Successfully loaded platform dependent plugin aws:runShellScript Sep 13 00:56:09.816479 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Sep 13 00:56:09.816571 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO OS: linux, Arch: amd64 Sep 13 00:56:09.821941 amazon-ssm-agent[1744]: datastore file /var/lib/amazon/ssm/i-01222cc93ffd3844f/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute Sep 13 00:56:09.915670 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] Starting document processing engine... Sep 13 00:56:09.963785 locksmithd[1760]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:56:10.009896 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [EngineProcessor] Starting Sep 13 00:56:10.104199 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Sep 13 00:56:10.198710 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] Starting message polling Sep 13 00:56:10.293489 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] Starting send replies to MDS Sep 13 00:56:10.388332 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [instanceID=i-01222cc93ffd3844f] Starting association polling Sep 13 00:56:10.483685 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Sep 13 00:56:10.579052 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [Association] Launching response handler Sep 13 00:56:10.674483 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Sep 13 00:56:10.770181 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Sep 13 00:56:10.866144 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Sep 13 00:56:10.909257 sshd_keygen[1718]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:56:10.935810 systemd[1]: Finished sshd-keygen.service. Sep 13 00:56:10.938836 systemd[1]: Starting issuegen.service... Sep 13 00:56:10.941355 systemd[1]: Started sshd@0-172.31.27.34:22-147.75.109.163:42530.service. Sep 13 00:56:10.948050 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:56:10.948305 systemd[1]: Finished issuegen.service. Sep 13 00:56:10.950730 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:56:10.962125 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [HealthCheck] HealthCheck reporting agent health. Sep 13 00:56:10.962798 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:56:10.966028 systemd[1]: Started getty@tty1.service. Sep 13 00:56:10.970353 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:56:10.971905 systemd[1]: Reached target getty.target. Sep 13 00:56:11.058561 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] Starting session document processing engine... Sep 13 00:56:11.142611 sshd[1897]: Accepted publickey for core from 147.75.109.163 port 42530 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:56:11.146426 sshd[1897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:11.154922 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] [EngineProcessor] Starting Sep 13 00:56:11.164280 systemd[1]: Created slice user-500.slice. Sep 13 00:56:11.166095 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:56:11.177466 systemd-logind[1703]: New session 1 of user core. Sep 13 00:56:11.181230 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:56:11.183591 systemd[1]: Starting user@500.service... 
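The "Accepted publickey for core ... SHA256:9zKS..." lines above identify the client key by its OpenSSH-style fingerprint: the unpadded base64 encoding of the SHA-256 digest of the raw public-key blob. A sketch that reproduces the format from any public key line; the file path is taken from the log but the key it holds is of course not reproduced here:

    import base64, hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        """OpenSSH SHA256 fingerprint: base64(sha256(base64-decoded key blob)), no '=' padding."""
        # A plain public key line looks like: "ssh-ed25519 AAAA... comment"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:   # path written earlier in the log
        print(ssh_fingerprint(f.readline()))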
Sep 13 00:56:11.187879 (systemd)[1905]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:11.251618 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Sep 13 00:56:11.282030 systemd[1905]: Queued start job for default target default.target. Sep 13 00:56:11.282750 systemd[1905]: Reached target paths.target. Sep 13 00:56:11.282773 systemd[1905]: Reached target sockets.target. Sep 13 00:56:11.282787 systemd[1905]: Reached target timers.target. Sep 13 00:56:11.282799 systemd[1905]: Reached target basic.target. Sep 13 00:56:11.282907 systemd[1]: Started user@500.service. Sep 13 00:56:11.284367 systemd[1]: Started session-1.scope. Sep 13 00:56:11.285322 systemd[1905]: Reached target default.target. Sep 13 00:56:11.285499 systemd[1905]: Startup finished in 90ms. Sep 13 00:56:11.348445 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-01222cc93ffd3844f, requestId: 26f5bc64-752f-40be-8e9c-74661fab4510 Sep 13 00:56:11.430288 systemd[1]: Started sshd@1-172.31.27.34:22-147.75.109.163:42542.service. Sep 13 00:56:11.446219 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [LongRunningPluginsManager] starting long running plugin manager Sep 13 00:56:11.524900 systemd[1]: Started kubelet.service. Sep 13 00:56:11.527443 systemd[1]: Reached target multi-user.target. Sep 13 00:56:11.529768 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:56:11.543528 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Sep 13 00:56:11.546300 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 13 00:56:11.546545 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:56:11.547323 systemd[1]: Startup finished in 626ms (kernel) + 6.269s (initrd) + 9.505s (userspace) = 16.401s. Sep 13 00:56:11.599709 sshd[1914]: Accepted publickey for core from 147.75.109.163 port 42542 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:56:11.600549 sshd[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:11.605725 systemd-logind[1703]: New session 2 of user core. Sep 13 00:56:11.606249 systemd[1]: Started session-2.scope. Sep 13 00:56:11.641119 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [OfflineService] Starting document processing engine... Sep 13 00:56:11.734249 sshd[1914]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:11.738126 systemd[1]: sshd@1-172.31.27.34:22-147.75.109.163:42542.service: Deactivated successfully. Sep 13 00:56:11.738985 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:56:11.739586 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [OfflineService] [EngineProcessor] Starting Sep 13 00:56:11.739660 systemd-logind[1703]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:56:11.740611 systemd-logind[1703]: Removed session 2. Sep 13 00:56:11.757951 systemd[1]: Started sshd@2-172.31.27.34:22-147.75.109.163:42548.service. 
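The "Startup finished" line above splits boot time into kernel, initrd and userspace phases. Summing the printed components gives 0.626 s + 6.269 s + 9.505 s = 16.400 s, one millisecond short of the printed 16.401 s total, simply because each component is itself rounded to the millisecond before printing. A one-liner to check the arithmetic:

    parts_ms = {"kernel": 626, "initrd": 6269, "userspace": 9505}   # from the log line
    print(sum(parts_ms.values()) / 1000)   # 16.4, vs. the printed total of 16.401s (rounding)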
Sep 13 00:56:11.837456 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [OfflineService] [EngineProcessor] Initial processing Sep 13 00:56:11.913260 sshd[1924]: Accepted publickey for core from 147.75.109.163 port 42548 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:56:11.914628 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:11.919741 systemd-logind[1703]: New session 3 of user core. Sep 13 00:56:11.920228 systemd[1]: Started session-3.scope. Sep 13 00:56:11.935504 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [OfflineService] Starting message polling Sep 13 00:56:12.033755 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [OfflineService] Starting send replies to MDS Sep 13 00:56:12.041079 sshd[1924]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:12.044634 systemd[1]: sshd@2-172.31.27.34:22-147.75.109.163:42548.service: Deactivated successfully. Sep 13 00:56:12.046269 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:56:12.046517 systemd-logind[1703]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:56:12.047937 systemd-logind[1703]: Removed session 3. Sep 13 00:56:12.067997 systemd[1]: Started sshd@3-172.31.27.34:22-147.75.109.163:42556.service. Sep 13 00:56:12.132034 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] listening reply. Sep 13 00:56:12.228846 sshd[1935]: Accepted publickey for core from 147.75.109.163 port 42556 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:56:12.230475 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Sep 13 00:56:12.230427 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:12.235606 systemd[1]: Started session-4.scope. Sep 13 00:56:12.236290 systemd-logind[1703]: New session 4 of user core. Sep 13 00:56:12.329577 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [StartupProcessor] Executing startup processor tasks Sep 13 00:56:12.364055 sshd[1935]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:12.367482 systemd[1]: sshd@3-172.31.27.34:22-147.75.109.163:42556.service: Deactivated successfully. Sep 13 00:56:12.368166 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:56:12.368835 systemd-logind[1703]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:56:12.369652 systemd-logind[1703]: Removed session 4. Sep 13 00:56:12.389135 systemd[1]: Started sshd@4-172.31.27.34:22-147.75.109.163:42558.service. Sep 13 00:56:12.427833 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Sep 13 00:56:12.526908 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Sep 13 00:56:12.547047 sshd[1941]: Accepted publickey for core from 147.75.109.163 port 42558 ssh2: RSA SHA256:9zKSfA0UBs4YCbMNRE+jf2SchYlhVPu6zl9tBdI5N0M Sep 13 00:56:12.548890 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:56:12.555384 systemd[1]: Started session-5.scope. Sep 13 00:56:12.557650 systemd-logind[1703]: New session 5 of user core. 
Sep 13 00:56:12.626345 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Sep 13 00:56:12.665793 kubelet[1918]: E0913 00:56:12.665642 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:56:12.669100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:56:12.669249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:56:12.669490 systemd[1]: kubelet.service: Consumed 1.194s CPU time. Sep 13 00:56:12.685121 sudo[1944]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:56:12.685365 sudo[1944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:56:12.699811 systemd[1]: Starting coreos-metadata.service... Sep 13 00:56:12.725553 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01222cc93ffd3844f?role=subscribe&stream=input Sep 13 00:56:12.782881 coreos-metadata[1948]: Sep 13 00:56:12.782 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 13 00:56:12.784393 coreos-metadata[1948]: Sep 13 00:56:12.784 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Sep 13 00:56:12.785355 coreos-metadata[1948]: Sep 13 00:56:12.785 INFO Fetch successful Sep 13 00:56:12.785355 coreos-metadata[1948]: Sep 13 00:56:12.785 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Sep 13 00:56:12.786006 coreos-metadata[1948]: Sep 13 00:56:12.785 INFO Fetch successful Sep 13 00:56:12.786073 coreos-metadata[1948]: Sep 13 00:56:12.786 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Sep 13 00:56:12.786795 coreos-metadata[1948]: Sep 13 00:56:12.786 INFO Fetch successful Sep 13 00:56:12.786889 coreos-metadata[1948]: Sep 13 00:56:12.786 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Sep 13 00:56:12.787371 coreos-metadata[1948]: Sep 13 00:56:12.787 INFO Fetch successful Sep 13 00:56:12.787430 coreos-metadata[1948]: Sep 13 00:56:12.787 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Sep 13 00:56:12.788038 coreos-metadata[1948]: Sep 13 00:56:12.788 INFO Fetch successful Sep 13 00:56:12.788038 coreos-metadata[1948]: Sep 13 00:56:12.788 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Sep 13 00:56:12.788687 coreos-metadata[1948]: Sep 13 00:56:12.788 INFO Fetch successful Sep 13 00:56:12.788687 coreos-metadata[1948]: Sep 13 00:56:12.788 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Sep 13 00:56:12.789258 coreos-metadata[1948]: Sep 13 00:56:12.789 INFO Fetch successful Sep 13 00:56:12.789258 coreos-metadata[1948]: Sep 13 00:56:12.789 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Sep 13 00:56:12.790014 coreos-metadata[1948]: Sep 13 00:56:12.789 INFO Fetch successful Sep 13 00:56:12.799153 systemd[1]: Finished coreos-metadata.service. 
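The first kubelet start above fails because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file is normally written by whatever provisions the node (for example kubeadm during a join, driven here by the install.sh run under sudo), after which kubelet.service is restarted, as happens further down. Purely as an illustration of the kind of file the kubelet is looking for, a sketch that writes a minimal KubeletConfiguration; the field values are assumptions, not what this node actually received:

    import pathlib

    # Hypothetical minimal KubeletConfiguration; real clusters generate this file
    # (kubeadm, cloud provisioning, etc.) rather than writing it by hand.
    minimal_config = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    """

    pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(minimal_config)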
Sep 13 00:56:12.825210 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01222cc93ffd3844f?role=subscribe&stream=input Sep 13 00:56:12.925096 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] Starting receiving message from control channel Sep 13 00:56:13.025259 amazon-ssm-agent[1744]: 2025-09-13 00:56:09 INFO [MessageGatewayService] [EngineProcessor] Initial processing Sep 13 00:56:13.125610 amazon-ssm-agent[1744]: 2025-09-13 00:56:12 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Sep 13 00:56:13.802325 systemd[1]: Stopped kubelet.service. Sep 13 00:56:13.802478 systemd[1]: kubelet.service: Consumed 1.194s CPU time. Sep 13 00:56:13.804975 systemd[1]: Starting kubelet.service... Sep 13 00:56:13.833313 systemd[1]: Reloading. Sep 13 00:56:13.934273 /usr/lib/systemd/system-generators/torcx-generator[2001]: time="2025-09-13T00:56:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:56:13.937063 /usr/lib/systemd/system-generators/torcx-generator[2001]: time="2025-09-13T00:56:13Z" level=info msg="torcx already run" Sep 13 00:56:14.091861 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:56:14.091890 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:56:14.112620 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:56:14.231222 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:56:14.231454 systemd[1]: Stopped kubelet.service. Sep 13 00:56:14.233692 systemd[1]: Starting kubelet.service... Sep 13 00:56:14.506586 systemd[1]: Started kubelet.service. Sep 13 00:56:14.556734 kubelet[2062]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:56:14.557072 kubelet[2062]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:56:14.557145 kubelet[2062]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
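During the reload systemd warns that locksmithd.service still carries the cgroup-v1 directives CPUShares= and MemoryLimit=, whose unified-hierarchy replacements are CPUWeight= and MemoryMax=. MemoryLimit= carries over as the same byte value; for CPU, one widely used conversion (for instance, the one Kubernetes applies when mapping v1 shares onto v2 weights) is weight = 1 + ((shares - 2) * 9999) / 262142. How systemd translates the legacy directive internally may differ, so treat the sketch below purely as a reference point:

    def cpu_shares_to_weight(shares: int) -> int:
        """Map cgroup-v1 cpu.shares (2..262144) onto cgroup-v2 cpu.weight (1..10000),
        aligning the endpoints of the two ranges (2 -> 1, 262144 -> 10000)."""
        shares = min(max(shares, 2), 262144)
        return 1 + ((shares - 2) * 9999) // 262142

    # The v1 default of 1024 shares maps to 39 under this formula; note that it
    # aligns the range endpoints, not the defaults (the v2 default weight is 100).
    print(cpu_shares_to_weight(1024))   # 39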
Sep 13 00:56:14.557328 kubelet[2062]: I0913 00:56:14.557306 2062 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:56:15.081868 kubelet[2062]: I0913 00:56:15.081817 2062 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:56:15.081868 kubelet[2062]: I0913 00:56:15.081864 2062 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:56:15.082534 kubelet[2062]: I0913 00:56:15.082510 2062 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:56:15.140789 kubelet[2062]: I0913 00:56:15.140752 2062 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:56:15.155768 kubelet[2062]: E0913 00:56:15.155712 2062 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:56:15.155768 kubelet[2062]: I0913 00:56:15.155775 2062 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:56:15.158122 kubelet[2062]: I0913 00:56:15.158066 2062 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:56:15.158489 kubelet[2062]: I0913 00:56:15.158451 2062 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:56:15.158684 kubelet[2062]: I0913 00:56:15.158482 2062 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.27.34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:56:15.158684 kubelet[2062]: I0913 00:56:15.158676 2062 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:56:15.158684 kubelet[2062]: I0913 00:56:15.158686 2062 container_manager_linux.go:304] "Creating device plugin manager" Sep 
13 00:56:15.158889 kubelet[2062]: I0913 00:56:15.158799 2062 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:56:15.163745 kubelet[2062]: I0913 00:56:15.163709 2062 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:56:15.163745 kubelet[2062]: I0913 00:56:15.163752 2062 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:56:15.163928 kubelet[2062]: I0913 00:56:15.163786 2062 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:56:15.163928 kubelet[2062]: I0913 00:56:15.163800 2062 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:56:15.166584 kubelet[2062]: E0913 00:56:15.166554 2062 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:15.166804 kubelet[2062]: E0913 00:56:15.166782 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:15.171253 kubelet[2062]: I0913 00:56:15.171202 2062 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:56:15.171613 kubelet[2062]: I0913 00:56:15.171597 2062 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:56:15.171670 kubelet[2062]: W0913 00:56:15.171646 2062 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:56:15.173713 kubelet[2062]: I0913 00:56:15.173680 2062 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:56:15.173821 kubelet[2062]: I0913 00:56:15.173725 2062 server.go:1287] "Started kubelet" Sep 13 00:56:15.173961 kubelet[2062]: I0913 00:56:15.173918 2062 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:56:15.175277 kubelet[2062]: I0913 00:56:15.174793 2062 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:56:15.180458 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
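The NodeConfig dump above shows the kubelet's default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and inodesFree < 5%. A small sketch of how such a percentage threshold is checked against observed capacity; the capacity numbers are invented for the example:

    def below_threshold(available: float, capacity: float, pct: float) -> bool:
        """True when 'available' has dropped below pct * capacity (a hard eviction signal)."""
        return available < pct * capacity

    # Invented numbers: an 8 GiB root filesystem with 600 MiB free.
    capacity_bytes  = 8 * 1024**3
    available_bytes = 600 * 1024**2
    print(below_threshold(available_bytes, capacity_bytes, 0.10))   # True: nodefs.available fires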
Sep 13 00:56:15.180538 kubelet[2062]: I0913 00:56:15.180254 2062 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:56:15.180794 kubelet[2062]: I0913 00:56:15.180780 2062 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:56:15.180975 kubelet[2062]: I0913 00:56:15.180567 2062 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:56:15.183624 kubelet[2062]: I0913 00:56:15.183243 2062 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:56:15.195845 kubelet[2062]: I0913 00:56:15.195822 2062 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:56:15.196199 kubelet[2062]: E0913 00:56:15.196182 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:15.196644 kubelet[2062]: I0913 00:56:15.196631 2062 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:56:15.202812 kubelet[2062]: I0913 00:56:15.196797 2062 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:56:15.202943 kubelet[2062]: I0913 00:56:15.199278 2062 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:56:15.203153 kubelet[2062]: I0913 00:56:15.203136 2062 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:56:15.204214 kubelet[2062]: E0913 00:56:15.202777 2062 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.27.34\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Sep 13 00:56:15.204315 kubelet[2062]: W0913 00:56:15.201997 2062 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 13 00:56:15.204408 kubelet[2062]: E0913 00:56:15.204394 2062 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 13 00:56:15.204455 kubelet[2062]: W0913 00:56:15.202038 2062 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.27.34" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 13 00:56:15.204505 kubelet[2062]: E0913 00:56:15.204496 2062 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.27.34\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 13 00:56:15.204545 kubelet[2062]: W0913 00:56:15.202696 2062 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Sep 13 
00:56:15.204597 kubelet[2062]: E0913 00:56:15.204586 2062 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Sep 13 00:56:15.205207 kubelet[2062]: E0913 00:56:15.199581 2062 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.27.34.1864b18d560f8873 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.27.34,UID:172.31.27.34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.27.34,},FirstTimestamp:2025-09-13 00:56:15.173699699 +0000 UTC m=+0.663196390,LastTimestamp:2025-09-13 00:56:15.173699699 +0000 UTC m=+0.663196390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.27.34,}" Sep 13 00:56:15.205804 kubelet[2062]: I0913 00:56:15.205791 2062 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:56:15.229533 kubelet[2062]: E0913 00:56:15.221210 2062 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:56:15.229533 kubelet[2062]: I0913 00:56:15.227672 2062 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:56:15.229533 kubelet[2062]: I0913 00:56:15.227681 2062 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:56:15.229533 kubelet[2062]: I0913 00:56:15.227696 2062 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:56:15.233024 kubelet[2062]: I0913 00:56:15.232087 2062 policy_none.go:49] "None policy: Start" Sep 13 00:56:15.233024 kubelet[2062]: I0913 00:56:15.232121 2062 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:56:15.233024 kubelet[2062]: I0913 00:56:15.232136 2062 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:56:15.240596 systemd[1]: Created slice kubepods.slice. Sep 13 00:56:15.249704 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:56:15.253606 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:56:15.260201 kubelet[2062]: I0913 00:56:15.260176 2062 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:56:15.260504 kubelet[2062]: I0913 00:56:15.260493 2062 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:56:15.260615 kubelet[2062]: I0913 00:56:15.260582 2062 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:56:15.262656 kubelet[2062]: I0913 00:56:15.262638 2062 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:56:15.265891 kubelet[2062]: E0913 00:56:15.265858 2062 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:56:15.266000 kubelet[2062]: E0913 00:56:15.265909 2062 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.34\" not found" Sep 13 00:56:15.347936 kubelet[2062]: I0913 00:56:15.346346 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:56:15.347936 kubelet[2062]: I0913 00:56:15.347871 2062 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:56:15.347936 kubelet[2062]: I0913 00:56:15.347892 2062 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:56:15.347936 kubelet[2062]: I0913 00:56:15.347919 2062 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:56:15.347936 kubelet[2062]: I0913 00:56:15.347925 2062 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:56:15.348230 kubelet[2062]: E0913 00:56:15.347974 2062 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 13 00:56:15.362315 kubelet[2062]: I0913 00:56:15.362278 2062 kubelet_node_status.go:75] "Attempting to register node" node="172.31.27.34" Sep 13 00:56:15.372068 kubelet[2062]: I0913 00:56:15.372035 2062 kubelet_node_status.go:78] "Successfully registered node" node="172.31.27.34" Sep 13 00:56:15.372068 kubelet[2062]: E0913 00:56:15.372069 2062 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172.31.27.34\": node \"172.31.27.34\" not found" Sep 13 00:56:15.397661 kubelet[2062]: E0913 00:56:15.397627 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:15.498578 kubelet[2062]: E0913 00:56:15.498536 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:15.566086 sudo[1944]: pam_unix(sudo:session): session closed for user root Sep 13 00:56:15.590814 sshd[1941]: pam_unix(sshd:session): session closed for user core Sep 13 00:56:15.593660 systemd[1]: sshd@4-172.31.27.34:22-147.75.109.163:42558.service: Deactivated successfully. Sep 13 00:56:15.594601 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:56:15.595528 systemd-logind[1703]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:56:15.597231 systemd-logind[1703]: Removed session 5. 
Sep 13 00:56:15.599083 kubelet[2062]: E0913 00:56:15.598988 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:15.700148 kubelet[2062]: E0913 00:56:15.700091 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:15.800975 kubelet[2062]: E0913 00:56:15.800932 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:15.901975 kubelet[2062]: E0913 00:56:15.901848 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.002705 kubelet[2062]: E0913 00:56:16.002659 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.088235 kubelet[2062]: I0913 00:56:16.088192 2062 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 13 00:56:16.088402 kubelet[2062]: W0913 00:56:16.088372 2062 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 13 00:56:16.103743 kubelet[2062]: E0913 00:56:16.103689 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.167596 kubelet[2062]: E0913 00:56:16.167478 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:16.204188 kubelet[2062]: E0913 00:56:16.204150 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.304807 kubelet[2062]: E0913 00:56:16.304765 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.405378 kubelet[2062]: E0913 00:56:16.405327 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.506525 kubelet[2062]: E0913 00:56:16.506423 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.606803 kubelet[2062]: E0913 00:56:16.606746 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.707117 kubelet[2062]: E0913 00:56:16.707074 2062 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172.31.27.34\" not found" Sep 13 00:56:16.808633 kubelet[2062]: I0913 00:56:16.808535 2062 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 13 00:56:16.809319 env[1711]: time="2025-09-13T00:56:16.809282932Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 00:56:16.809771 kubelet[2062]: I0913 00:56:16.809756 2062 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 13 00:56:17.168567 kubelet[2062]: E0913 00:56:17.168444 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:17.171812 kubelet[2062]: I0913 00:56:17.171761 2062 apiserver.go:52] "Watching apiserver" Sep 13 00:56:17.181674 systemd[1]: Created slice kubepods-burstable-podfdde9885_6cca_4d33_96bd_890dbe110ac8.slice. Sep 13 00:56:17.196200 systemd[1]: Created slice kubepods-besteffort-podce6d5c80_1a0a_47af_be52_b6b2435e3c13.slice. Sep 13 00:56:17.205084 kubelet[2062]: I0913 00:56:17.205046 2062 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:56:17.216475 kubelet[2062]: I0913 00:56:17.216435 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce6d5c80-1a0a-47af-be52-b6b2435e3c13-xtables-lock\") pod \"kube-proxy-zrxq2\" (UID: \"ce6d5c80-1a0a-47af-be52-b6b2435e3c13\") " pod="kube-system/kube-proxy-zrxq2" Sep 13 00:56:17.216669 kubelet[2062]: I0913 00:56:17.216652 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-lib-modules\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.216781 kubelet[2062]: I0913 00:56:17.216765 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hkk\" (UniqueName: \"kubernetes.io/projected/ce6d5c80-1a0a-47af-be52-b6b2435e3c13-kube-api-access-g2hkk\") pod \"kube-proxy-zrxq2\" (UID: \"ce6d5c80-1a0a-47af-be52-b6b2435e3c13\") " pod="kube-system/kube-proxy-zrxq2" Sep 13 00:56:17.216883 kubelet[2062]: I0913 00:56:17.216873 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-bpf-maps\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.216952 kubelet[2062]: I0913 00:56:17.216942 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdde9885-6cca-4d33-96bd-890dbe110ac8-clustermesh-secrets\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217021 kubelet[2062]: I0913 00:56:17.217012 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-kernel\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217077 kubelet[2062]: I0913 00:56:17.217068 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-net\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217162 kubelet[2062]: I0913 00:56:17.217151 2062 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-hubble-tls\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217269 kubelet[2062]: I0913 00:56:17.217258 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce6d5c80-1a0a-47af-be52-b6b2435e3c13-lib-modules\") pod \"kube-proxy-zrxq2\" (UID: \"ce6d5c80-1a0a-47af-be52-b6b2435e3c13\") " pod="kube-system/kube-proxy-zrxq2" Sep 13 00:56:17.217348 kubelet[2062]: I0913 00:56:17.217338 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-hostproc\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217410 kubelet[2062]: I0913 00:56:17.217400 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-cgroup\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217477 kubelet[2062]: I0913 00:56:17.217467 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-etc-cni-netd\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217552 kubelet[2062]: I0913 00:56:17.217541 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-xtables-lock\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217642 kubelet[2062]: I0913 00:56:17.217608 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-config-path\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217642 kubelet[2062]: I0913 00:56:17.217632 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce6d5c80-1a0a-47af-be52-b6b2435e3c13-kube-proxy\") pod \"kube-proxy-zrxq2\" (UID: \"ce6d5c80-1a0a-47af-be52-b6b2435e3c13\") " pod="kube-system/kube-proxy-zrxq2" Sep 13 00:56:17.217642 kubelet[2062]: I0913 00:56:17.217647 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-run\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217795 kubelet[2062]: I0913 00:56:17.217662 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cni-path\") pod 
\"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.217795 kubelet[2062]: I0913 00:56:17.217680 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7g9d\" (UniqueName: \"kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-kube-api-access-z7g9d\") pod \"cilium-mvf9x\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " pod="kube-system/cilium-mvf9x" Sep 13 00:56:17.319397 kubelet[2062]: I0913 00:56:17.319366 2062 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 00:56:17.493729 env[1711]: time="2025-09-13T00:56:17.492436090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mvf9x,Uid:fdde9885-6cca-4d33-96bd-890dbe110ac8,Namespace:kube-system,Attempt:0,}" Sep 13 00:56:17.505430 env[1711]: time="2025-09-13T00:56:17.505380808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zrxq2,Uid:ce6d5c80-1a0a-47af-be52-b6b2435e3c13,Namespace:kube-system,Attempt:0,}" Sep 13 00:56:18.040558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287413334.mount: Deactivated successfully. Sep 13 00:56:18.059695 env[1711]: time="2025-09-13T00:56:18.059628553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.063558 env[1711]: time="2025-09-13T00:56:18.063516145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.065658 env[1711]: time="2025-09-13T00:56:18.065621185Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.070725 env[1711]: time="2025-09-13T00:56:18.070671986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.072563 env[1711]: time="2025-09-13T00:56:18.072520517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.074479 env[1711]: time="2025-09-13T00:56:18.074435152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.077174 env[1711]: time="2025-09-13T00:56:18.077134235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.081178 env[1711]: time="2025-09-13T00:56:18.081135245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:18.118283 env[1711]: time="2025-09-13T00:56:18.110135505Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:56:18.118283 env[1711]: time="2025-09-13T00:56:18.110205522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:56:18.118283 env[1711]: time="2025-09-13T00:56:18.110222570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:56:18.118283 env[1711]: time="2025-09-13T00:56:18.110412955Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0 pid=2114 runtime=io.containerd.runc.v2 Sep 13 00:56:18.118617 env[1711]: time="2025-09-13T00:56:18.113267192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:56:18.118617 env[1711]: time="2025-09-13T00:56:18.113299910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:56:18.118617 env[1711]: time="2025-09-13T00:56:18.113310630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:56:18.118617 env[1711]: time="2025-09-13T00:56:18.113452360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54b36b5f2b7725e5bbf6d7a6082eb37dd668508e09aaa670177af025f4b342f8 pid=2128 runtime=io.containerd.runc.v2 Sep 13 00:56:18.141811 systemd[1]: Started cri-containerd-f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0.scope. Sep 13 00:56:18.165801 systemd[1]: Started cri-containerd-54b36b5f2b7725e5bbf6d7a6082eb37dd668508e09aaa670177af025f4b342f8.scope. 
Sep 13 00:56:18.168915 kubelet[2062]: E0913 00:56:18.168824 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:18.195689 env[1711]: time="2025-09-13T00:56:18.195416676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mvf9x,Uid:fdde9885-6cca-4d33-96bd-890dbe110ac8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\"" Sep 13 00:56:18.201075 env[1711]: time="2025-09-13T00:56:18.201026952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 00:56:18.209595 env[1711]: time="2025-09-13T00:56:18.209539642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zrxq2,Uid:ce6d5c80-1a0a-47af-be52-b6b2435e3c13,Namespace:kube-system,Attempt:0,} returns sandbox id \"54b36b5f2b7725e5bbf6d7a6082eb37dd668508e09aaa670177af025f4b342f8\"" Sep 13 00:56:19.169443 kubelet[2062]: E0913 00:56:19.169401 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:20.170505 kubelet[2062]: E0913 00:56:20.170442 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:21.171308 kubelet[2062]: E0913 00:56:21.171262 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:22.172099 kubelet[2062]: E0913 00:56:22.172055 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:23.145072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952497588.mount: Deactivated successfully. 
Sep 13 00:56:23.172675 kubelet[2062]: E0913 00:56:23.172619 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:24.173190 kubelet[2062]: E0913 00:56:24.173133 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:25.173482 kubelet[2062]: E0913 00:56:25.173443 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:26.154534 env[1711]: time="2025-09-13T00:56:26.154473372Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:26.159526 env[1711]: time="2025-09-13T00:56:26.159466945Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:26.163231 env[1711]: time="2025-09-13T00:56:26.163184793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:26.163933 env[1711]: time="2025-09-13T00:56:26.163890512Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 00:56:26.165960 env[1711]: time="2025-09-13T00:56:26.165925501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:56:26.167326 env[1711]: time="2025-09-13T00:56:26.167287791Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:56:26.175993 kubelet[2062]: E0913 00:56:26.175604 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:26.189178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324114066.mount: Deactivated successfully. Sep 13 00:56:26.198504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672811226.mount: Deactivated successfully. Sep 13 00:56:26.209559 env[1711]: time="2025-09-13T00:56:26.209509959Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\"" Sep 13 00:56:26.211847 env[1711]: time="2025-09-13T00:56:26.211809173Z" level=info msg="StartContainer for \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\"" Sep 13 00:56:26.238076 systemd[1]: Started cri-containerd-cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e.scope. Sep 13 00:56:26.276434 env[1711]: time="2025-09-13T00:56:26.276378067Z" level=info msg="StartContainer for \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\" returns successfully" Sep 13 00:56:26.285353 systemd[1]: cri-containerd-cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e.scope: Deactivated successfully. 
Sep 13 00:56:26.367980 env[1711]: time="2025-09-13T00:56:26.367933422Z" level=info msg="shim disconnected" id=cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e Sep 13 00:56:26.367980 env[1711]: time="2025-09-13T00:56:26.367975782Z" level=warning msg="cleaning up after shim disconnected" id=cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e namespace=k8s.io Sep 13 00:56:26.367980 env[1711]: time="2025-09-13T00:56:26.367985322Z" level=info msg="cleaning up dead shim" Sep 13 00:56:26.377360 env[1711]: time="2025-09-13T00:56:26.377313363Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2240 runtime=io.containerd.runc.v2\n" Sep 13 00:56:26.400848 env[1711]: time="2025-09-13T00:56:26.400804357Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:56:26.413800 env[1711]: time="2025-09-13T00:56:26.413675130Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\"" Sep 13 00:56:26.415246 env[1711]: time="2025-09-13T00:56:26.415068254Z" level=info msg="StartContainer for \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\"" Sep 13 00:56:26.437442 systemd[1]: Started cri-containerd-e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef.scope. Sep 13 00:56:26.473787 env[1711]: time="2025-09-13T00:56:26.473727655Z" level=info msg="StartContainer for \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\" returns successfully" Sep 13 00:56:26.482660 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:56:26.482969 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:56:26.483680 systemd[1]: Stopping systemd-sysctl.service... Sep 13 00:56:26.486071 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:56:26.492935 systemd[1]: cri-containerd-e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef.scope: Deactivated successfully. Sep 13 00:56:26.504528 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:56:26.522693 env[1711]: time="2025-09-13T00:56:26.522652808Z" level=info msg="shim disconnected" id=e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef Sep 13 00:56:26.522921 env[1711]: time="2025-09-13T00:56:26.522892858Z" level=warning msg="cleaning up after shim disconnected" id=e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef namespace=k8s.io Sep 13 00:56:26.522921 env[1711]: time="2025-09-13T00:56:26.522911693Z" level=info msg="cleaning up dead shim" Sep 13 00:56:26.530917 env[1711]: time="2025-09-13T00:56:26.530876241Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2303 runtime=io.containerd.runc.v2\n" Sep 13 00:56:27.176713 kubelet[2062]: E0913 00:56:27.176671 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:27.187487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e-rootfs.mount: Deactivated successfully. 
Sep 13 00:56:27.298460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873183764.mount: Deactivated successfully. Sep 13 00:56:27.404612 env[1711]: time="2025-09-13T00:56:27.404560073Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:56:27.431252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128440135.mount: Deactivated successfully. Sep 13 00:56:27.440711 env[1711]: time="2025-09-13T00:56:27.440660609Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\"" Sep 13 00:56:27.441357 env[1711]: time="2025-09-13T00:56:27.441330381Z" level=info msg="StartContainer for \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\"" Sep 13 00:56:27.477262 systemd[1]: Started cri-containerd-ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f.scope. Sep 13 00:56:27.539629 systemd[1]: cri-containerd-ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f.scope: Deactivated successfully. Sep 13 00:56:27.544214 env[1711]: time="2025-09-13T00:56:27.544159914Z" level=info msg="StartContainer for \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\" returns successfully" Sep 13 00:56:27.731084 env[1711]: time="2025-09-13T00:56:27.730882955Z" level=info msg="shim disconnected" id=ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f Sep 13 00:56:27.731084 env[1711]: time="2025-09-13T00:56:27.730931438Z" level=warning msg="cleaning up after shim disconnected" id=ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f namespace=k8s.io Sep 13 00:56:27.731084 env[1711]: time="2025-09-13T00:56:27.730940927Z" level=info msg="cleaning up dead shim" Sep 13 00:56:27.739719 env[1711]: time="2025-09-13T00:56:27.739673659Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2361 runtime=io.containerd.runc.v2\n" Sep 13 00:56:28.087653 env[1711]: time="2025-09-13T00:56:28.087364308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:28.098390 env[1711]: time="2025-09-13T00:56:28.098067135Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:28.100756 env[1711]: time="2025-09-13T00:56:28.100705946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:28.103368 env[1711]: time="2025-09-13T00:56:28.103318920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:28.103674 env[1711]: time="2025-09-13T00:56:28.103645958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 
00:56:28.105777 env[1711]: time="2025-09-13T00:56:28.105718335Z" level=info msg="CreateContainer within sandbox \"54b36b5f2b7725e5bbf6d7a6082eb37dd668508e09aaa670177af025f4b342f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:56:28.130233 env[1711]: time="2025-09-13T00:56:28.130065146Z" level=info msg="CreateContainer within sandbox \"54b36b5f2b7725e5bbf6d7a6082eb37dd668508e09aaa670177af025f4b342f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bcdf02b588bfe49516b11b921c4ef6014fc21a85387456904497ea9341f26ac3\"" Sep 13 00:56:28.130750 env[1711]: time="2025-09-13T00:56:28.130724262Z" level=info msg="StartContainer for \"bcdf02b588bfe49516b11b921c4ef6014fc21a85387456904497ea9341f26ac3\"" Sep 13 00:56:28.150395 systemd[1]: Started cri-containerd-bcdf02b588bfe49516b11b921c4ef6014fc21a85387456904497ea9341f26ac3.scope. Sep 13 00:56:28.177257 kubelet[2062]: E0913 00:56:28.177191 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:28.196802 env[1711]: time="2025-09-13T00:56:28.196745353Z" level=info msg="StartContainer for \"bcdf02b588bfe49516b11b921c4ef6014fc21a85387456904497ea9341f26ac3\" returns successfully" Sep 13 00:56:28.411823 env[1711]: time="2025-09-13T00:56:28.411705934Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:56:28.422560 kubelet[2062]: I0913 00:56:28.422041 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zrxq2" podStartSLOduration=3.528497066 podStartE2EDuration="13.422024324s" podCreationTimestamp="2025-09-13 00:56:15 +0000 UTC" firstStartedPulling="2025-09-13 00:56:18.210894958 +0000 UTC m=+3.700391637" lastFinishedPulling="2025-09-13 00:56:28.104422217 +0000 UTC m=+13.593918895" observedRunningTime="2025-09-13 00:56:28.42194578 +0000 UTC m=+13.911442476" watchObservedRunningTime="2025-09-13 00:56:28.422024324 +0000 UTC m=+13.911521023" Sep 13 00:56:28.431328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228690508.mount: Deactivated successfully. Sep 13 00:56:28.440687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3206167700.mount: Deactivated successfully. Sep 13 00:56:28.451274 env[1711]: time="2025-09-13T00:56:28.451223276Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\"" Sep 13 00:56:28.451924 env[1711]: time="2025-09-13T00:56:28.451888420Z" level=info msg="StartContainer for \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\"" Sep 13 00:56:28.480550 systemd[1]: Started cri-containerd-bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8.scope. Sep 13 00:56:28.540290 systemd[1]: cri-containerd-bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8.scope: Deactivated successfully. 
Sep 13 00:56:28.543925 env[1711]: time="2025-09-13T00:56:28.543877039Z" level=info msg="StartContainer for \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\" returns successfully" Sep 13 00:56:28.660011 env[1711]: time="2025-09-13T00:56:28.659969346Z" level=info msg="shim disconnected" id=bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8 Sep 13 00:56:28.660338 env[1711]: time="2025-09-13T00:56:28.660319413Z" level=warning msg="cleaning up after shim disconnected" id=bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8 namespace=k8s.io Sep 13 00:56:28.660413 env[1711]: time="2025-09-13T00:56:28.660402451Z" level=info msg="cleaning up dead shim" Sep 13 00:56:28.668889 env[1711]: time="2025-09-13T00:56:28.668415157Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:56:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2568 runtime=io.containerd.runc.v2\n" Sep 13 00:56:29.177847 kubelet[2062]: E0913 00:56:29.177775 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:29.415571 env[1711]: time="2025-09-13T00:56:29.415527489Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:56:29.439716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993738335.mount: Deactivated successfully. Sep 13 00:56:29.448931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093665954.mount: Deactivated successfully. Sep 13 00:56:29.458891 env[1711]: time="2025-09-13T00:56:29.458840994Z" level=info msg="CreateContainer within sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\"" Sep 13 00:56:29.459579 env[1711]: time="2025-09-13T00:56:29.459548216Z" level=info msg="StartContainer for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\"" Sep 13 00:56:29.478919 systemd[1]: Started cri-containerd-82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09.scope. 
Sep 13 00:56:29.527033 env[1711]: time="2025-09-13T00:56:29.526969385Z" level=info msg="StartContainer for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" returns successfully" Sep 13 00:56:29.709078 kubelet[2062]: I0913 00:56:29.708974 2062 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:56:29.971439 kernel: Initializing XFRM netlink socket Sep 13 00:56:30.179320 kubelet[2062]: E0913 00:56:30.178895 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:30.439563 kubelet[2062]: I0913 00:56:30.439431 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mvf9x" podStartSLOduration=7.474310066 podStartE2EDuration="15.439413423s" podCreationTimestamp="2025-09-13 00:56:15 +0000 UTC" firstStartedPulling="2025-09-13 00:56:18.200357128 +0000 UTC m=+3.689853808" lastFinishedPulling="2025-09-13 00:56:26.165460467 +0000 UTC m=+11.654957165" observedRunningTime="2025-09-13 00:56:30.439231287 +0000 UTC m=+15.928727987" watchObservedRunningTime="2025-09-13 00:56:30.439413423 +0000 UTC m=+15.928910104" Sep 13 00:56:31.179272 kubelet[2062]: E0913 00:56:31.179217 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:31.616821 systemd-timesyncd[1664]: Network configuration changed, trying to establish connection. Sep 13 00:56:31.619588 systemd-networkd[1444]: cilium_host: Link UP Sep 13 00:56:31.620295 (udev-worker)[2449]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:56:31.620663 systemd-networkd[1444]: cilium_net: Link UP Sep 13 00:56:31.621821 systemd-networkd[1444]: cilium_net: Gained carrier Sep 13 00:56:31.624527 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 13 00:56:31.624591 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:56:31.624289 systemd-networkd[1444]: cilium_host: Gained carrier Sep 13 00:56:31.624793 (udev-worker)[2744]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:56:32.308731 systemd-timesyncd[1664]: Contacted time server 72.14.186.59:123 (2.flatcar.pool.ntp.org). Sep 13 00:56:32.308847 systemd-timesyncd[1664]: Initial clock synchronization to Sat 2025-09-13 00:56:32.308544 UTC. Sep 13 00:56:32.309749 systemd-resolved[1662]: Clock change detected. Flushing caches. Sep 13 00:56:32.311185 (udev-worker)[2754]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:56:32.317947 systemd-networkd[1444]: cilium_vxlan: Link UP Sep 13 00:56:32.317960 systemd-networkd[1444]: cilium_vxlan: Gained carrier Sep 13 00:56:32.543818 kernel: NET: Registered PF_ALG protocol family Sep 13 00:56:32.753620 kubelet[2062]: E0913 00:56:32.753515 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:32.973979 systemd-networkd[1444]: cilium_host: Gained IPv6LL Sep 13 00:56:33.101937 systemd-networkd[1444]: cilium_net: Gained IPv6LL Sep 13 00:56:33.222970 (udev-worker)[2756]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:56:33.238628 systemd-networkd[1444]: lxc_health: Link UP Sep 13 00:56:33.262808 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:56:33.262840 systemd-networkd[1444]: lxc_health: Gained carrier Sep 13 00:56:33.704102 systemd[1]: Created slice kubepods-besteffort-pod79b43b56_e476_4f00_abd6_0e80f5ed1f5c.slice. Sep 13 00:56:33.753802 kubelet[2062]: E0913 00:56:33.753746 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:33.806050 systemd-networkd[1444]: cilium_vxlan: Gained IPv6LL Sep 13 00:56:33.815969 kubelet[2062]: I0913 00:56:33.815913 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvs4v\" (UniqueName: \"kubernetes.io/projected/79b43b56-e476-4f00-abd6-0e80f5ed1f5c-kube-api-access-qvs4v\") pod \"nginx-deployment-7fcdb87857-2rqdh\" (UID: \"79b43b56-e476-4f00-abd6-0e80f5ed1f5c\") " pod="default/nginx-deployment-7fcdb87857-2rqdh" Sep 13 00:56:34.009997 env[1711]: time="2025-09-13T00:56:34.009394442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2rqdh,Uid:79b43b56-e476-4f00-abd6-0e80f5ed1f5c,Namespace:default,Attempt:0,}" Sep 13 00:56:34.089960 systemd-networkd[1444]: lxc279416fd0fa6: Link UP Sep 13 00:56:34.096973 kernel: eth0: renamed from tmpd84ed Sep 13 00:56:34.106170 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc279416fd0fa6: link becomes ready Sep 13 00:56:34.104031 systemd-networkd[1444]: lxc279416fd0fa6: Gained carrier Sep 13 00:56:34.512366 systemd-networkd[1444]: lxc_health: Gained IPv6LL Sep 13 00:56:34.755373 kubelet[2062]: E0913 00:56:34.755324 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:35.471997 systemd-networkd[1444]: lxc279416fd0fa6: Gained IPv6LL Sep 13 00:56:35.738100 kubelet[2062]: E0913 00:56:35.737974 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:35.756861 kubelet[2062]: E0913 00:56:35.756817 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:36.758316 kubelet[2062]: E0913 00:56:36.758267 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:37.759659 kubelet[2062]: E0913 00:56:37.759615 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:38.070876 env[1711]: time="2025-09-13T00:56:38.070395715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:56:38.070876 env[1711]: time="2025-09-13T00:56:38.070440893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:56:38.071380 env[1711]: time="2025-09-13T00:56:38.070458058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:56:38.071380 env[1711]: time="2025-09-13T00:56:38.070618103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d84ed1310c52dfe08c8bb0a599f411703502d4ba9952ed71dbc794cd114106c9 pid=3119 runtime=io.containerd.runc.v2 Sep 13 00:56:38.099597 systemd[1]: run-containerd-runc-k8s.io-d84ed1310c52dfe08c8bb0a599f411703502d4ba9952ed71dbc794cd114106c9-runc.K3jV2H.mount: Deactivated successfully. Sep 13 00:56:38.104124 systemd[1]: Started cri-containerd-d84ed1310c52dfe08c8bb0a599f411703502d4ba9952ed71dbc794cd114106c9.scope. Sep 13 00:56:38.153288 env[1711]: time="2025-09-13T00:56:38.153242520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2rqdh,Uid:79b43b56-e476-4f00-abd6-0e80f5ed1f5c,Namespace:default,Attempt:0,} returns sandbox id \"d84ed1310c52dfe08c8bb0a599f411703502d4ba9952ed71dbc794cd114106c9\"" Sep 13 00:56:38.154943 env[1711]: time="2025-09-13T00:56:38.154901724Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:56:38.491944 kubelet[2062]: I0913 00:56:38.491484 2062 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:56:38.760503 kubelet[2062]: E0913 00:56:38.760367 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:39.760510 kubelet[2062]: E0913 00:56:39.760450 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:40.092353 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 00:56:40.690714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24305358.mount: Deactivated successfully. 
Sep 13 00:56:40.761289 kubelet[2062]: E0913 00:56:40.761244 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:41.761678 kubelet[2062]: E0913 00:56:41.761615 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:42.337758 env[1711]: time="2025-09-13T00:56:42.337706237Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:42.342349 env[1711]: time="2025-09-13T00:56:42.342295753Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:42.345730 env[1711]: time="2025-09-13T00:56:42.345680993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:42.348604 env[1711]: time="2025-09-13T00:56:42.348558505Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:42.349437 env[1711]: time="2025-09-13T00:56:42.349399798Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 13 00:56:42.351906 env[1711]: time="2025-09-13T00:56:42.351850877Z" level=info msg="CreateContainer within sandbox \"d84ed1310c52dfe08c8bb0a599f411703502d4ba9952ed71dbc794cd114106c9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 13 00:56:42.368821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount157357623.mount: Deactivated successfully. Sep 13 00:56:42.373637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056672048.mount: Deactivated successfully. Sep 13 00:56:42.379439 env[1711]: time="2025-09-13T00:56:42.379379183Z" level=info msg="CreateContainer within sandbox \"d84ed1310c52dfe08c8bb0a599f411703502d4ba9952ed71dbc794cd114106c9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"39aa40dd9680883c291f9e526e7ec5f5c74530ae143d54ccd985f43905ddfcbe\"" Sep 13 00:56:42.380312 env[1711]: time="2025-09-13T00:56:42.380263607Z" level=info msg="StartContainer for \"39aa40dd9680883c291f9e526e7ec5f5c74530ae143d54ccd985f43905ddfcbe\"" Sep 13 00:56:42.410570 systemd[1]: Started cri-containerd-39aa40dd9680883c291f9e526e7ec5f5c74530ae143d54ccd985f43905ddfcbe.scope. 
Sep 13 00:56:42.442474 env[1711]: time="2025-09-13T00:56:42.442416472Z" level=info msg="StartContainer for \"39aa40dd9680883c291f9e526e7ec5f5c74530ae143d54ccd985f43905ddfcbe\" returns successfully" Sep 13 00:56:42.631197 amazon-ssm-agent[1744]: 2025-09-13 00:56:42 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Sep 13 00:56:42.762611 kubelet[2062]: E0913 00:56:42.762555 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:43.763423 kubelet[2062]: E0913 00:56:43.763351 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:44.764515 kubelet[2062]: E0913 00:56:44.764408 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:45.764804 kubelet[2062]: E0913 00:56:45.764736 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:46.765747 kubelet[2062]: E0913 00:56:46.765706 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:47.143059 kubelet[2062]: I0913 00:56:47.142938 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2rqdh" podStartSLOduration=9.946592029 podStartE2EDuration="14.142921595s" podCreationTimestamp="2025-09-13 00:56:33 +0000 UTC" firstStartedPulling="2025-09-13 00:56:38.15433523 +0000 UTC m=+23.070151286" lastFinishedPulling="2025-09-13 00:56:42.350664808 +0000 UTC m=+27.266480852" observedRunningTime="2025-09-13 00:56:43.02528954 +0000 UTC m=+27.941105603" watchObservedRunningTime="2025-09-13 00:56:47.142921595 +0000 UTC m=+32.058737659" Sep 13 00:56:47.148513 systemd[1]: Created slice kubepods-besteffort-pod6cf3ed1f_d765_43b8_ad20_d0dc1ca76e34.slice. Sep 13 00:56:47.218110 kubelet[2062]: I0913 00:56:47.218062 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85br\" (UniqueName: \"kubernetes.io/projected/6cf3ed1f-d765-43b8-ad20-d0dc1ca76e34-kube-api-access-p85br\") pod \"nfs-server-provisioner-0\" (UID: \"6cf3ed1f-d765-43b8-ad20-d0dc1ca76e34\") " pod="default/nfs-server-provisioner-0" Sep 13 00:56:47.218304 kubelet[2062]: I0913 00:56:47.218118 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6cf3ed1f-d765-43b8-ad20-d0dc1ca76e34-data\") pod \"nfs-server-provisioner-0\" (UID: \"6cf3ed1f-d765-43b8-ad20-d0dc1ca76e34\") " pod="default/nfs-server-provisioner-0" Sep 13 00:56:47.455810 env[1711]: time="2025-09-13T00:56:47.452640993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6cf3ed1f-d765-43b8-ad20-d0dc1ca76e34,Namespace:default,Attempt:0,}" Sep 13 00:56:47.526461 (udev-worker)[3218]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:56:47.527404 (udev-worker)[3234]: Network interface NamePolicy= disabled on kernel command line. 
Sep 13 00:56:47.530219 systemd-networkd[1444]: lxc6697486d8ced: Link UP Sep 13 00:56:47.535928 kernel: eth0: renamed from tmpd5d27 Sep 13 00:56:47.545456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:56:47.545586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6697486d8ced: link becomes ready Sep 13 00:56:47.545761 systemd-networkd[1444]: lxc6697486d8ced: Gained carrier Sep 13 00:56:47.762074 env[1711]: time="2025-09-13T00:56:47.761978348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:56:47.762074 env[1711]: time="2025-09-13T00:56:47.762035145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:56:47.762343 env[1711]: time="2025-09-13T00:56:47.762051721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:56:47.762343 env[1711]: time="2025-09-13T00:56:47.762227790Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5d27fbc01649700bbdbd4ee280e56aed442a1f205eb5f85b6c8d7fd403e5805 pid=3249 runtime=io.containerd.runc.v2 Sep 13 00:56:47.768269 kubelet[2062]: E0913 00:56:47.767471 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:47.781056 systemd[1]: Started cri-containerd-d5d27fbc01649700bbdbd4ee280e56aed442a1f205eb5f85b6c8d7fd403e5805.scope. Sep 13 00:56:47.840421 env[1711]: time="2025-09-13T00:56:47.840371368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6cf3ed1f-d765-43b8-ad20-d0dc1ca76e34,Namespace:default,Attempt:0,} returns sandbox id \"d5d27fbc01649700bbdbd4ee280e56aed442a1f205eb5f85b6c8d7fd403e5805\"" Sep 13 00:56:47.842226 env[1711]: time="2025-09-13T00:56:47.842184326Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 13 00:56:48.655597 systemd-networkd[1444]: lxc6697486d8ced: Gained IPv6LL Sep 13 00:56:48.767900 kubelet[2062]: E0913 00:56:48.767839 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:49.768765 kubelet[2062]: E0913 00:56:49.768721 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:50.341782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081506881.mount: Deactivated successfully. 
Sep 13 00:56:50.769192 kubelet[2062]: E0913 00:56:50.769121 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:51.769444 kubelet[2062]: E0913 00:56:51.769395 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:52.534958 env[1711]: time="2025-09-13T00:56:52.534890603Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.539235 env[1711]: time="2025-09-13T00:56:52.539186847Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.542653 env[1711]: time="2025-09-13T00:56:52.542615916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.545362 env[1711]: time="2025-09-13T00:56:52.545319471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:56:52.546191 env[1711]: time="2025-09-13T00:56:52.546082322Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Sep 13 00:56:52.549052 env[1711]: time="2025-09-13T00:56:52.549015258Z" level=info msg="CreateContainer within sandbox \"d5d27fbc01649700bbdbd4ee280e56aed442a1f205eb5f85b6c8d7fd403e5805\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 13 00:56:52.563856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509823048.mount: Deactivated successfully. Sep 13 00:56:52.570038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905842861.mount: Deactivated successfully. Sep 13 00:56:52.580186 env[1711]: time="2025-09-13T00:56:52.580126235Z" level=info msg="CreateContainer within sandbox \"d5d27fbc01649700bbdbd4ee280e56aed442a1f205eb5f85b6c8d7fd403e5805\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bf9767b0a269fa2919740bd56db96536c6bdb807c1f4981d0f8e539a00a8dbd5\"" Sep 13 00:56:52.580935 env[1711]: time="2025-09-13T00:56:52.580905398Z" level=info msg="StartContainer for \"bf9767b0a269fa2919740bd56db96536c6bdb807c1f4981d0f8e539a00a8dbd5\"" Sep 13 00:56:52.605453 systemd[1]: Started cri-containerd-bf9767b0a269fa2919740bd56db96536c6bdb807c1f4981d0f8e539a00a8dbd5.scope. 
Sep 13 00:56:52.645589 env[1711]: time="2025-09-13T00:56:52.645528871Z" level=info msg="StartContainer for \"bf9767b0a269fa2919740bd56db96536c6bdb807c1f4981d0f8e539a00a8dbd5\" returns successfully" Sep 13 00:56:52.769677 kubelet[2062]: E0913 00:56:52.769643 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:53.055662 kubelet[2062]: I0913 00:56:53.055573 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.349735071 podStartE2EDuration="6.055556712s" podCreationTimestamp="2025-09-13 00:56:47 +0000 UTC" firstStartedPulling="2025-09-13 00:56:47.841736321 +0000 UTC m=+32.757552365" lastFinishedPulling="2025-09-13 00:56:52.547557963 +0000 UTC m=+37.463374006" observedRunningTime="2025-09-13 00:56:53.054943732 +0000 UTC m=+37.970759805" watchObservedRunningTime="2025-09-13 00:56:53.055556712 +0000 UTC m=+37.971372773" Sep 13 00:56:53.770479 kubelet[2062]: E0913 00:56:53.770434 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:54.722682 update_engine[1705]: I0913 00:56:54.722563 1705 update_attempter.cc:509] Updating boot flags... Sep 13 00:56:54.774168 kubelet[2062]: E0913 00:56:54.772921 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:55.737726 kubelet[2062]: E0913 00:56:55.737672 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:55.773593 kubelet[2062]: E0913 00:56:55.773554 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:56.774447 kubelet[2062]: E0913 00:56:56.774392 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:57.775015 kubelet[2062]: E0913 00:56:57.774959 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:58.775224 kubelet[2062]: E0913 00:56:58.775166 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:56:59.776148 kubelet[2062]: E0913 00:56:59.776088 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:00.776251 kubelet[2062]: E0913 00:57:00.776206 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:01.777005 kubelet[2062]: E0913 00:57:01.776950 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:02.081227 systemd[1]: Created slice kubepods-besteffort-pod16568e86_5544_4834_bf28_e924080116d1.slice. 
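The pod_startup_latency_tracker entry above for default/nfs-server-provisioner-0 reports three figures that are internally consistent: podStartE2EDuration is the gap between podCreationTimestamp and watchObservedRunningTime, and podStartSLOduration matches that same gap minus the image-pull window (firstStartedPulling to lastFinishedPulling). A minimal sketch that re-derives the numbers from the timestamps quoted in the log (nanoseconds truncated to microseconds, so the last digits shift slightly):

from datetime import datetime

def parse(ts):
    # Timestamps in the log look like "2025-09-13 00:56:47.841736321 +0000 UTC";
    # drop the trailing "UTC" and truncate nanoseconds to microseconds for strptime.
    date, time, offset, _ = ts.split()
    sec, _, frac = time.partition(".")
    return datetime.strptime(f"{date} {sec}.{(frac + '000000')[:6]} {offset}",
                             "%Y-%m-%d %H:%M:%S.%f %z")

created   = parse("2025-09-13 00:56:47.000000000 +0000 UTC")  # podCreationTimestamp
pull_from = parse("2025-09-13 00:56:47.841736321 +0000 UTC")  # firstStartedPulling
pull_to   = parse("2025-09-13 00:56:52.547557963 +0000 UTC")  # lastFinishedPulling
running   = parse("2025-09-13 00:56:53.055556712 +0000 UTC")  # watchObservedRunningTime

e2e  = (running - created).total_seconds()    # ~6.056 s  -> podStartE2EDuration
pull = (pull_to - pull_from).total_seconds()  # ~4.706 s pulling nfs-provisioner:v4.0.8
slo  = e2e - pull                             # ~1.350 s  -> podStartSLOduration
print(f"e2e={e2e:.3f}s  pull={pull:.3f}s  slo={slo:.3f}s")

The same relationship holds for the default/test-pod-1 entry further below: 18.130 s end-to-end minus the 0.340 s nginx pull gives the reported 17.789 s SLO duration.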
Sep 13 00:57:02.129866 kubelet[2062]: I0913 00:57:02.129805 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gjn\" (UniqueName: \"kubernetes.io/projected/16568e86-5544-4834-bf28-e924080116d1-kube-api-access-79gjn\") pod \"test-pod-1\" (UID: \"16568e86-5544-4834-bf28-e924080116d1\") " pod="default/test-pod-1" Sep 13 00:57:02.130057 kubelet[2062]: I0913 00:57:02.129870 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b48103e7-03a6-4cab-bc7f-fec1b1c6d449\" (UniqueName: \"kubernetes.io/nfs/16568e86-5544-4834-bf28-e924080116d1-pvc-b48103e7-03a6-4cab-bc7f-fec1b1c6d449\") pod \"test-pod-1\" (UID: \"16568e86-5544-4834-bf28-e924080116d1\") " pod="default/test-pod-1" Sep 13 00:57:02.389816 kernel: FS-Cache: Loaded Sep 13 00:57:02.561596 kernel: RPC: Registered named UNIX socket transport module. Sep 13 00:57:02.561769 kernel: RPC: Registered udp transport module. Sep 13 00:57:02.561827 kernel: RPC: Registered tcp transport module. Sep 13 00:57:02.564949 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 13 00:57:02.776843 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 13 00:57:02.777181 kubelet[2062]: E0913 00:57:02.777033 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:03.309509 kernel: NFS: Registering the id_resolver key type Sep 13 00:57:03.309672 kernel: Key type id_resolver registered Sep 13 00:57:03.309721 kernel: Key type id_legacy registered Sep 13 00:57:03.425925 nfsidmap[3573]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 13 00:57:03.432461 nfsidmap[3574]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Sep 13 00:57:03.594596 env[1711]: time="2025-09-13T00:57:03.594454663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:16568e86-5544-4834-bf28-e924080116d1,Namespace:default,Attempt:0,}" Sep 13 00:57:03.673906 (udev-worker)[3562]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:57:03.674604 (udev-worker)[3568]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:57:03.681179 systemd-networkd[1444]: lxc4a44e3129e7d: Link UP Sep 13 00:57:03.687905 kernel: eth0: renamed from tmpcf427 Sep 13 00:57:03.696513 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 13 00:57:03.696651 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4a44e3129e7d: link becomes ready Sep 13 00:57:03.697321 systemd-networkd[1444]: lxc4a44e3129e7d: Gained carrier Sep 13 00:57:03.777502 kubelet[2062]: E0913 00:57:03.777411 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:03.945270 env[1711]: time="2025-09-13T00:57:03.945100224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:03.945270 env[1711]: time="2025-09-13T00:57:03.945143232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:03.945714 env[1711]: time="2025-09-13T00:57:03.945174278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:03.945905 env[1711]: time="2025-09-13T00:57:03.945764920Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf4274a3dbc566d8a53270a1b65df69f22c597e089fa51b36b34d2a9a344fb8b pid=3601 runtime=io.containerd.runc.v2 Sep 13 00:57:03.967234 systemd[1]: Started cri-containerd-cf4274a3dbc566d8a53270a1b65df69f22c597e089fa51b36b34d2a9a344fb8b.scope. Sep 13 00:57:04.023025 env[1711]: time="2025-09-13T00:57:04.022982382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:16568e86-5544-4834-bf28-e924080116d1,Namespace:default,Attempt:0,} returns sandbox id \"cf4274a3dbc566d8a53270a1b65df69f22c597e089fa51b36b34d2a9a344fb8b\"" Sep 13 00:57:04.024966 env[1711]: time="2025-09-13T00:57:04.024935125Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 13 00:57:04.350087 env[1711]: time="2025-09-13T00:57:04.350031835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:04.354496 env[1711]: time="2025-09-13T00:57:04.354407077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:04.357425 env[1711]: time="2025-09-13T00:57:04.357377011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:04.360304 env[1711]: time="2025-09-13T00:57:04.360266921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:04.361312 env[1711]: time="2025-09-13T00:57:04.361201965Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:4cbb30cb60f877a307c1f0bcdaca389dd24689ff60c6fb370f0cca7367185c48\"" Sep 13 00:57:04.366874 env[1711]: time="2025-09-13T00:57:04.366825116Z" level=info msg="CreateContainer within sandbox \"cf4274a3dbc566d8a53270a1b65df69f22c597e089fa51b36b34d2a9a344fb8b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 13 00:57:04.393124 env[1711]: time="2025-09-13T00:57:04.393037286Z" level=info msg="CreateContainer within sandbox \"cf4274a3dbc566d8a53270a1b65df69f22c597e089fa51b36b34d2a9a344fb8b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ad23373605bc1757f9e9346266b267f7c6e622345042270b694ef8369238a7d0\"" Sep 13 00:57:04.394043 env[1711]: time="2025-09-13T00:57:04.393990024Z" level=info msg="StartContainer for \"ad23373605bc1757f9e9346266b267f7c6e622345042270b694ef8369238a7d0\"" Sep 13 00:57:04.412401 systemd[1]: Started cri-containerd-ad23373605bc1757f9e9346266b267f7c6e622345042270b694ef8369238a7d0.scope. 
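The two nfsidmap messages above come from the NFSv4 id-mapping upcall registered just before them (key types id_resolver and id_legacy): the server hands back owners such as root@nfs-server-provisioner.default.svc.cluster.local, and libnfsidmap refuses to translate them because the part after the '@' does not match the client's idmap domain, us-west-2.compute.internal; unmapped owners are then typically squashed to the nobody user. A small sketch of the check being reported (the domain value would normally come from /etc/idmapd.conf or the host's DNS domain; it is hard-coded here purely for illustration):

owner = "root@nfs-server-provisioner.default.svc.cluster.local"
idmap_domain = "us-west-2.compute.internal"   # assumed source: /etc/idmapd.conf [General] Domain

user, _, domain = owner.partition("@")
if domain != idmap_domain:
    # This is the condition nfsidmap is reporting; the uid/gid lookup is skipped.
    print(f"nss_getpwnam: name '{owner}' does not map into domain '{idmap_domain}'")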
Sep 13 00:57:04.453172 env[1711]: time="2025-09-13T00:57:04.452899802Z" level=info msg="StartContainer for \"ad23373605bc1757f9e9346266b267f7c6e622345042270b694ef8369238a7d0\" returns successfully" Sep 13 00:57:04.778427 kubelet[2062]: E0913 00:57:04.778316 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:05.129853 kubelet[2062]: I0913 00:57:05.129553 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.789364241 podStartE2EDuration="18.129534563s" podCreationTimestamp="2025-09-13 00:56:47 +0000 UTC" firstStartedPulling="2025-09-13 00:57:04.024165461 +0000 UTC m=+48.939981503" lastFinishedPulling="2025-09-13 00:57:04.364335768 +0000 UTC m=+49.280151825" observedRunningTime="2025-09-13 00:57:05.128601708 +0000 UTC m=+50.044417768" watchObservedRunningTime="2025-09-13 00:57:05.129534563 +0000 UTC m=+50.045350649" Sep 13 00:57:05.294073 systemd-networkd[1444]: lxc4a44e3129e7d: Gained IPv6LL Sep 13 00:57:05.779343 kubelet[2062]: E0913 00:57:05.779292 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:06.779843 kubelet[2062]: E0913 00:57:06.779779 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:07.781191 kubelet[2062]: E0913 00:57:07.781111 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:08.781806 kubelet[2062]: E0913 00:57:08.781734 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:09.782181 kubelet[2062]: E0913 00:57:09.782118 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:10.782949 kubelet[2062]: E0913 00:57:10.782842 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:11.783059 kubelet[2062]: E0913 00:57:11.782999 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:12.275421 systemd[1]: run-containerd-runc-k8s.io-82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09-runc.WXeqg7.mount: Deactivated successfully. Sep 13 00:57:12.300939 env[1711]: time="2025-09-13T00:57:12.300866614Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:57:12.307646 env[1711]: time="2025-09-13T00:57:12.307597555Z" level=info msg="StopContainer for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" with timeout 2 (s)" Sep 13 00:57:12.308048 env[1711]: time="2025-09-13T00:57:12.308017063Z" level=info msg="Stop container \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" with signal terminated" Sep 13 00:57:12.316535 systemd-networkd[1444]: lxc_health: Link DOWN Sep 13 00:57:12.316544 systemd-networkd[1444]: lxc_health: Lost carrier Sep 13 00:57:12.342480 systemd[1]: cri-containerd-82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09.scope: Deactivated successfully. 
Sep 13 00:57:12.342749 systemd[1]: cri-containerd-82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09.scope: Consumed 7.341s CPU time. Sep 13 00:57:12.366827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09-rootfs.mount: Deactivated successfully. Sep 13 00:57:12.393767 env[1711]: time="2025-09-13T00:57:12.393706334Z" level=info msg="shim disconnected" id=82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09 Sep 13 00:57:12.394040 env[1711]: time="2025-09-13T00:57:12.393834411Z" level=warning msg="cleaning up after shim disconnected" id=82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09 namespace=k8s.io Sep 13 00:57:12.394040 env[1711]: time="2025-09-13T00:57:12.393856162Z" level=info msg="cleaning up dead shim" Sep 13 00:57:12.402949 env[1711]: time="2025-09-13T00:57:12.402889632Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3733 runtime=io.containerd.runc.v2\n" Sep 13 00:57:12.406897 env[1711]: time="2025-09-13T00:57:12.406846419Z" level=info msg="StopContainer for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" returns successfully" Sep 13 00:57:12.407687 env[1711]: time="2025-09-13T00:57:12.407649267Z" level=info msg="StopPodSandbox for \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\"" Sep 13 00:57:12.407844 env[1711]: time="2025-09-13T00:57:12.407725296Z" level=info msg="Container to stop \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:57:12.407844 env[1711]: time="2025-09-13T00:57:12.407756560Z" level=info msg="Container to stop \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:57:12.407844 env[1711]: time="2025-09-13T00:57:12.407774752Z" level=info msg="Container to stop \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:57:12.407844 env[1711]: time="2025-09-13T00:57:12.407819683Z" level=info msg="Container to stop \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:57:12.407844 env[1711]: time="2025-09-13T00:57:12.407835658Z" level=info msg="Container to stop \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:57:12.410432 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0-shm.mount: Deactivated successfully. Sep 13 00:57:12.418952 systemd[1]: cri-containerd-f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0.scope: Deactivated successfully. Sep 13 00:57:12.440601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0-rootfs.mount: Deactivated successfully. 
Sep 13 00:57:12.453865 env[1711]: time="2025-09-13T00:57:12.453816755Z" level=info msg="shim disconnected" id=f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0 Sep 13 00:57:12.453865 env[1711]: time="2025-09-13T00:57:12.453859872Z" level=warning msg="cleaning up after shim disconnected" id=f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0 namespace=k8s.io Sep 13 00:57:12.453865 env[1711]: time="2025-09-13T00:57:12.453868953Z" level=info msg="cleaning up dead shim" Sep 13 00:57:12.463265 env[1711]: time="2025-09-13T00:57:12.463064415Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3764 runtime=io.containerd.runc.v2\n" Sep 13 00:57:12.464477 env[1711]: time="2025-09-13T00:57:12.464437042Z" level=info msg="TearDown network for sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" successfully" Sep 13 00:57:12.464477 env[1711]: time="2025-09-13T00:57:12.464469669Z" level=info msg="StopPodSandbox for \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" returns successfully" Sep 13 00:57:12.545007 kubelet[2062]: I0913 00:57:12.544191 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-cgroup\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545007 kubelet[2062]: I0913 00:57:12.544244 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-config-path\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545007 kubelet[2062]: I0913 00:57:12.544265 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-bpf-maps\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545007 kubelet[2062]: I0913 00:57:12.544279 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-xtables-lock\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545007 kubelet[2062]: I0913 00:57:12.544300 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-net\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545007 kubelet[2062]: I0913 00:57:12.544315 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-hostproc\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545297 kubelet[2062]: I0913 00:57:12.544334 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7g9d\" (UniqueName: \"kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-kube-api-access-z7g9d\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: 
\"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545297 kubelet[2062]: I0913 00:57:12.544349 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-etc-cni-netd\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545297 kubelet[2062]: I0913 00:57:12.544366 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-lib-modules\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545297 kubelet[2062]: I0913 00:57:12.544385 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdde9885-6cca-4d33-96bd-890dbe110ac8-clustermesh-secrets\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545297 kubelet[2062]: I0913 00:57:12.544405 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-hubble-tls\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545297 kubelet[2062]: I0913 00:57:12.544419 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-run\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545462 kubelet[2062]: I0913 00:57:12.544434 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cni-path\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545462 kubelet[2062]: I0913 00:57:12.544450 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-kernel\") pod \"fdde9885-6cca-4d33-96bd-890dbe110ac8\" (UID: \"fdde9885-6cca-4d33-96bd-890dbe110ac8\") " Sep 13 00:57:12.545462 kubelet[2062]: I0913 00:57:12.544520 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.545462 kubelet[2062]: I0913 00:57:12.544555 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.545462 kubelet[2062]: I0913 00:57:12.544757 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.545603 kubelet[2062]: I0913 00:57:12.544776 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.545603 kubelet[2062]: I0913 00:57:12.544828 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.545603 kubelet[2062]: I0913 00:57:12.544845 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.545603 kubelet[2062]: I0913 00:57:12.544858 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-hostproc" (OuterVolumeSpecName: "hostproc") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.549285 kubelet[2062]: I0913 00:57:12.549238 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:57:12.549419 kubelet[2062]: I0913 00:57:12.549317 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.550953 kubelet[2062]: I0913 00:57:12.550917 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.551232 kubelet[2062]: I0913 00:57:12.551205 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cni-path" (OuterVolumeSpecName: "cni-path") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:12.552461 kubelet[2062]: I0913 00:57:12.552438 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdde9885-6cca-4d33-96bd-890dbe110ac8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:57:12.552711 kubelet[2062]: I0913 00:57:12.552697 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-kube-api-access-z7g9d" (OuterVolumeSpecName: "kube-api-access-z7g9d") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "kube-api-access-z7g9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:57:12.554769 kubelet[2062]: I0913 00:57:12.554746 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fdde9885-6cca-4d33-96bd-890dbe110ac8" (UID: "fdde9885-6cca-4d33-96bd-890dbe110ac8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:57:12.645395 kubelet[2062]: I0913 00:57:12.645337 2062 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-xtables-lock\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645395 kubelet[2062]: I0913 00:57:12.645392 2062 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-etc-cni-netd\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645406 2062 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-net\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645421 2062 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-hostproc\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645434 2062 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7g9d\" (UniqueName: \"kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-kube-api-access-z7g9d\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645446 2062 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-host-proc-sys-kernel\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645456 2062 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-lib-modules\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645466 2062 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fdde9885-6cca-4d33-96bd-890dbe110ac8-clustermesh-secrets\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645477 2062 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fdde9885-6cca-4d33-96bd-890dbe110ac8-hubble-tls\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645648 kubelet[2062]: I0913 00:57:12.645488 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-run\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645946 kubelet[2062]: I0913 00:57:12.645500 2062 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cni-path\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645946 kubelet[2062]: I0913 00:57:12.645511 2062 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-bpf-maps\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645946 kubelet[2062]: I0913 00:57:12.645521 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-cgroup\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.645946 kubelet[2062]: I0913 00:57:12.645535 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fdde9885-6cca-4d33-96bd-890dbe110ac8-cilium-config-path\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:12.783279 kubelet[2062]: E0913 00:57:12.783192 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:13.083519 kubelet[2062]: I0913 00:57:13.083486 2062 scope.go:117] "RemoveContainer" containerID="82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09" Sep 13 00:57:13.084726 env[1711]: time="2025-09-13T00:57:13.084689552Z" level=info msg="RemoveContainer for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\"" Sep 13 00:57:13.090566 systemd[1]: Removed slice kubepods-burstable-podfdde9885_6cca_4d33_96bd_890dbe110ac8.slice. Sep 13 00:57:13.090701 systemd[1]: kubepods-burstable-podfdde9885_6cca_4d33_96bd_890dbe110ac8.slice: Consumed 7.471s CPU time. 
Sep 13 00:57:13.092672 env[1711]: time="2025-09-13T00:57:13.092627160Z" level=info msg="RemoveContainer for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" returns successfully" Sep 13 00:57:13.093128 kubelet[2062]: I0913 00:57:13.093095 2062 scope.go:117] "RemoveContainer" containerID="bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8" Sep 13 00:57:13.094268 env[1711]: time="2025-09-13T00:57:13.094234941Z" level=info msg="RemoveContainer for \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\"" Sep 13 00:57:13.099433 env[1711]: time="2025-09-13T00:57:13.099370558Z" level=info msg="RemoveContainer for \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\" returns successfully" Sep 13 00:57:13.099608 kubelet[2062]: I0913 00:57:13.099582 2062 scope.go:117] "RemoveContainer" containerID="ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f" Sep 13 00:57:13.100723 env[1711]: time="2025-09-13T00:57:13.100691679Z" level=info msg="RemoveContainer for \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\"" Sep 13 00:57:13.106185 env[1711]: time="2025-09-13T00:57:13.106136241Z" level=info msg="RemoveContainer for \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\" returns successfully" Sep 13 00:57:13.106372 kubelet[2062]: I0913 00:57:13.106349 2062 scope.go:117] "RemoveContainer" containerID="e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef" Sep 13 00:57:13.107671 env[1711]: time="2025-09-13T00:57:13.107629492Z" level=info msg="RemoveContainer for \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\"" Sep 13 00:57:13.112439 env[1711]: time="2025-09-13T00:57:13.112391193Z" level=info msg="RemoveContainer for \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\" returns successfully" Sep 13 00:57:13.112616 kubelet[2062]: I0913 00:57:13.112595 2062 scope.go:117] "RemoveContainer" containerID="cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e" Sep 13 00:57:13.113709 env[1711]: time="2025-09-13T00:57:13.113658610Z" level=info msg="RemoveContainer for \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\"" Sep 13 00:57:13.118637 env[1711]: time="2025-09-13T00:57:13.118568616Z" level=info msg="RemoveContainer for \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\" returns successfully" Sep 13 00:57:13.118841 kubelet[2062]: I0913 00:57:13.118817 2062 scope.go:117] "RemoveContainer" containerID="82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09" Sep 13 00:57:13.119413 env[1711]: time="2025-09-13T00:57:13.119304938Z" level=error msg="ContainerStatus for \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\": not found" Sep 13 00:57:13.121452 kubelet[2062]: E0913 00:57:13.121405 2062 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\": not found" containerID="82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09" Sep 13 00:57:13.121605 kubelet[2062]: I0913 00:57:13.121481 2062 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09"} err="failed to 
get container status \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\": rpc error: code = NotFound desc = an error occurred when try to find container \"82f67df77863a6ddd7ff8137311b18ba1055c2257df252bc817bd9d1a70dad09\": not found" Sep 13 00:57:13.121605 kubelet[2062]: I0913 00:57:13.121588 2062 scope.go:117] "RemoveContainer" containerID="bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8" Sep 13 00:57:13.122032 env[1711]: time="2025-09-13T00:57:13.121966249Z" level=error msg="ContainerStatus for \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\": not found" Sep 13 00:57:13.122220 kubelet[2062]: E0913 00:57:13.122179 2062 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\": not found" containerID="bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8" Sep 13 00:57:13.122308 kubelet[2062]: I0913 00:57:13.122227 2062 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8"} err="failed to get container status \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\": rpc error: code = NotFound desc = an error occurred when try to find container \"bab9ab144db12c6cd629bd57161863a6c7f2fc96f36f5638fcd34c4c5c230fb8\": not found" Sep 13 00:57:13.122308 kubelet[2062]: I0913 00:57:13.122253 2062 scope.go:117] "RemoveContainer" containerID="ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f" Sep 13 00:57:13.122545 env[1711]: time="2025-09-13T00:57:13.122483693Z" level=error msg="ContainerStatus for \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\": not found" Sep 13 00:57:13.122689 kubelet[2062]: E0913 00:57:13.122662 2062 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\": not found" containerID="ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f" Sep 13 00:57:13.122773 kubelet[2062]: I0913 00:57:13.122693 2062 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f"} err="failed to get container status \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ead88a612ae4befee566bb39db66df0ccfa4853d46e0723ee303d73145d7854f\": not found" Sep 13 00:57:13.122773 kubelet[2062]: I0913 00:57:13.122714 2062 scope.go:117] "RemoveContainer" containerID="e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef" Sep 13 00:57:13.122982 env[1711]: time="2025-09-13T00:57:13.122928077Z" level=error msg="ContainerStatus for \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\": not found" Sep 13 00:57:13.123270 kubelet[2062]: E0913 00:57:13.123242 2062 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\": not found" containerID="e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef" Sep 13 00:57:13.123362 kubelet[2062]: I0913 00:57:13.123276 2062 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef"} err="failed to get container status \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8c0141e9a7e6288445d4b467fa286a9d76eb044d0157bcbfed9b9a0d5927fef\": not found" Sep 13 00:57:13.123362 kubelet[2062]: I0913 00:57:13.123296 2062 scope.go:117] "RemoveContainer" containerID="cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e" Sep 13 00:57:13.123558 env[1711]: time="2025-09-13T00:57:13.123504135Z" level=error msg="ContainerStatus for \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\": not found" Sep 13 00:57:13.123681 kubelet[2062]: E0913 00:57:13.123655 2062 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\": not found" containerID="cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e" Sep 13 00:57:13.123758 kubelet[2062]: I0913 00:57:13.123685 2062 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e"} err="failed to get container status \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd1747196b2deff3e0d9d14ec4a24453b1a23f20912c6d9c9160d8f43b682e1e\": not found" Sep 13 00:57:13.270949 systemd[1]: var-lib-kubelet-pods-fdde9885\x2d6cca\x2d4d33\x2d96bd\x2d890dbe110ac8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz7g9d.mount: Deactivated successfully. Sep 13 00:57:13.271176 systemd[1]: var-lib-kubelet-pods-fdde9885\x2d6cca\x2d4d33\x2d96bd\x2d890dbe110ac8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:57:13.271278 systemd[1]: var-lib-kubelet-pods-fdde9885\x2d6cca\x2d4d33\x2d96bd\x2d890dbe110ac8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:57:13.783923 kubelet[2062]: E0913 00:57:13.783867 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:13.924534 kubelet[2062]: I0913 00:57:13.924487 2062 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdde9885-6cca-4d33-96bd-890dbe110ac8" path="/var/lib/kubelet/pods/fdde9885-6cca-4d33-96bd-890dbe110ac8/volumes" Sep 13 00:57:14.784708 kubelet[2062]: E0913 00:57:14.784604 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:15.493287 kubelet[2062]: I0913 00:57:15.493243 2062 memory_manager.go:355] "RemoveStaleState removing state" podUID="fdde9885-6cca-4d33-96bd-890dbe110ac8" containerName="cilium-agent" Sep 13 00:57:15.498388 systemd[1]: Created slice kubepods-burstable-pod90045384_b740_41b7_8b27_0e055abfc5ce.slice. Sep 13 00:57:15.509858 systemd[1]: Created slice kubepods-besteffort-pod75516ab1_874a_4290_9785_3e91a8efde2a.slice. Sep 13 00:57:15.564891 kubelet[2062]: I0913 00:57:15.564850 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-run\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.564891 kubelet[2062]: I0913 00:57:15.564892 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-lib-modules\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565146 kubelet[2062]: I0913 00:57:15.564912 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-config-path\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565146 kubelet[2062]: I0913 00:57:15.564931 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-kernel\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565146 kubelet[2062]: I0913 00:57:15.564947 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75516ab1-874a-4290-9785-3e91a8efde2a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mtbpl\" (UID: \"75516ab1-874a-4290-9785-3e91a8efde2a\") " pod="kube-system/cilium-operator-6c4d7847fc-mtbpl" Sep 13 00:57:15.565146 kubelet[2062]: I0913 00:57:15.564962 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-bpf-maps\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565146 kubelet[2062]: I0913 00:57:15.564990 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-xtables-lock\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565288 kubelet[2062]: I0913 00:57:15.565004 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-clustermesh-secrets\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565288 kubelet[2062]: I0913 00:57:15.565028 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-ipsec-secrets\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565288 kubelet[2062]: I0913 00:57:15.565047 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-hubble-tls\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565288 kubelet[2062]: I0913 00:57:15.565066 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdmc\" (UniqueName: \"kubernetes.io/projected/75516ab1-874a-4290-9785-3e91a8efde2a-kube-api-access-9mdmc\") pod \"cilium-operator-6c4d7847fc-mtbpl\" (UID: \"75516ab1-874a-4290-9785-3e91a8efde2a\") " pod="kube-system/cilium-operator-6c4d7847fc-mtbpl" Sep 13 00:57:15.565288 kubelet[2062]: I0913 00:57:15.565082 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-hostproc\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565462 kubelet[2062]: I0913 00:57:15.565097 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-cgroup\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565462 kubelet[2062]: I0913 00:57:15.565117 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-etc-cni-netd\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565462 kubelet[2062]: I0913 00:57:15.565132 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-net\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565462 kubelet[2062]: I0913 00:57:15.565148 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cni-path\") pod \"cilium-gr4r7\" (UID: 
\"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.565462 kubelet[2062]: I0913 00:57:15.565169 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ft2\" (UniqueName: \"kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-kube-api-access-q4ft2\") pod \"cilium-gr4r7\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " pod="kube-system/cilium-gr4r7" Sep 13 00:57:15.738283 kubelet[2062]: E0913 00:57:15.738223 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:15.785725 kubelet[2062]: E0913 00:57:15.785596 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:15.795643 env[1711]: time="2025-09-13T00:57:15.795604680Z" level=info msg="StopPodSandbox for \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\"" Sep 13 00:57:15.796002 env[1711]: time="2025-09-13T00:57:15.795691549Z" level=info msg="TearDown network for sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" successfully" Sep 13 00:57:15.796002 env[1711]: time="2025-09-13T00:57:15.795723648Z" level=info msg="StopPodSandbox for \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" returns successfully" Sep 13 00:57:15.796261 env[1711]: time="2025-09-13T00:57:15.796228080Z" level=info msg="RemovePodSandbox for \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\"" Sep 13 00:57:15.796392 env[1711]: time="2025-09-13T00:57:15.796259462Z" level=info msg="Forcibly stopping sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\"" Sep 13 00:57:15.796392 env[1711]: time="2025-09-13T00:57:15.796326713Z" level=info msg="TearDown network for sandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" successfully" Sep 13 00:57:15.801727 env[1711]: time="2025-09-13T00:57:15.801674133Z" level=info msg="RemovePodSandbox \"f4bbe350eaeee272c0b15bad26208c3b4df7746f93ddaf0ffd5431dd2f0c77b0\" returns successfully" Sep 13 00:57:15.806747 env[1711]: time="2025-09-13T00:57:15.806704150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gr4r7,Uid:90045384-b740-41b7-8b27-0e055abfc5ce,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:15.813418 env[1711]: time="2025-09-13T00:57:15.813375026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mtbpl,Uid:75516ab1-874a-4290-9785-3e91a8efde2a,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:15.834948 env[1711]: time="2025-09-13T00:57:15.834595425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:15.834948 env[1711]: time="2025-09-13T00:57:15.834649166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:15.834948 env[1711]: time="2025-09-13T00:57:15.834665895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:15.834948 env[1711]: time="2025-09-13T00:57:15.834843579Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74 pid=3793 runtime=io.containerd.runc.v2 Sep 13 00:57:15.846407 env[1711]: time="2025-09-13T00:57:15.846332047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:15.846612 env[1711]: time="2025-09-13T00:57:15.846581673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:15.846744 env[1711]: time="2025-09-13T00:57:15.846718449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:15.847357 env[1711]: time="2025-09-13T00:57:15.847318139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e99d11fe925baf004635edbb925f463b53bf354ecf34a5a14f688ae7f8393baa pid=3817 runtime=io.containerd.runc.v2 Sep 13 00:57:15.857668 systemd[1]: Started cri-containerd-7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74.scope. Sep 13 00:57:15.864706 kubelet[2062]: E0913 00:57:15.861117 2062 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:57:15.886565 systemd[1]: Started cri-containerd-e99d11fe925baf004635edbb925f463b53bf354ecf34a5a14f688ae7f8393baa.scope. Sep 13 00:57:15.915534 env[1711]: time="2025-09-13T00:57:15.915488789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gr4r7,Uid:90045384-b740-41b7-8b27-0e055abfc5ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74\"" Sep 13 00:57:15.919944 env[1711]: time="2025-09-13T00:57:15.919900038Z" level=info msg="CreateContainer within sandbox \"7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:57:15.948303 env[1711]: time="2025-09-13T00:57:15.948250521Z" level=info msg="CreateContainer within sandbox \"7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\"" Sep 13 00:57:15.949263 env[1711]: time="2025-09-13T00:57:15.949224369Z" level=info msg="StartContainer for \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\"" Sep 13 00:57:15.973994 env[1711]: time="2025-09-13T00:57:15.973946131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mtbpl,Uid:75516ab1-874a-4290-9785-3e91a8efde2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99d11fe925baf004635edbb925f463b53bf354ecf34a5a14f688ae7f8393baa\"" Sep 13 00:57:15.976993 env[1711]: time="2025-09-13T00:57:15.976942342Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 00:57:15.987668 systemd[1]: Started cri-containerd-6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d.scope. 
Sep 13 00:57:16.002080 systemd[1]: cri-containerd-6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d.scope: Deactivated successfully. Sep 13 00:57:16.033294 env[1711]: time="2025-09-13T00:57:16.033239781Z" level=info msg="shim disconnected" id=6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d Sep 13 00:57:16.033294 env[1711]: time="2025-09-13T00:57:16.033290566Z" level=warning msg="cleaning up after shim disconnected" id=6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d namespace=k8s.io Sep 13 00:57:16.033294 env[1711]: time="2025-09-13T00:57:16.033299160Z" level=info msg="cleaning up dead shim" Sep 13 00:57:16.043424 env[1711]: time="2025-09-13T00:57:16.042375210Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\ntime=\"2025-09-13T00:57:16Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 13 00:57:16.043424 env[1711]: time="2025-09-13T00:57:16.042623542Z" level=error msg="copy shim log" error="read /proc/self/fd/62: file already closed" Sep 13 00:57:16.043882 env[1711]: time="2025-09-13T00:57:16.043796320Z" level=error msg="Failed to pipe stdout of container \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\"" error="reading from a closed fifo" Sep 13 00:57:16.043882 env[1711]: time="2025-09-13T00:57:16.043859341Z" level=error msg="Failed to pipe stderr of container \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\"" error="reading from a closed fifo" Sep 13 00:57:16.047353 env[1711]: time="2025-09-13T00:57:16.047263238Z" level=error msg="StartContainer for \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 13 00:57:16.047548 kubelet[2062]: E0913 00:57:16.047509 2062 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d" Sep 13 00:57:16.047927 kubelet[2062]: E0913 00:57:16.047707 2062 kuberuntime_manager.go:1341] "Unhandled Error" err=< Sep 13 00:57:16.047927 kubelet[2062]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 13 00:57:16.047927 kubelet[2062]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 13 00:57:16.047927 kubelet[2062]: rm /hostbin/cilium-mount Sep 13 00:57:16.048083 kubelet[2062]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4ft2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-gr4r7_kube-system(90045384-b740-41b7-8b27-0e055abfc5ce): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 13 00:57:16.048083 kubelet[2062]: > logger="UnhandledError" Sep 13 00:57:16.048912 kubelet[2062]: E0913 00:57:16.048853 2062 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-gr4r7" podUID="90045384-b740-41b7-8b27-0e055abfc5ce" Sep 13 00:57:16.093122 env[1711]: time="2025-09-13T00:57:16.093077879Z" level=info msg="StopPodSandbox for \"7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74\"" Sep 13 00:57:16.093343 env[1711]: time="2025-09-13T00:57:16.093307768Z" level=info msg="Container to stop \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 00:57:16.100327 systemd[1]: cri-containerd-7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74.scope: Deactivated successfully. 
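Editor's note: the failure recorded above ("write /proc/self/attr/keycreate: invalid argument") is raised while the OCI runtime applies the SELinuxOptions visible in the init-container spec (Type:spc_t, Level:s0). Before starting the container process, the runtime writes the requested label to /proc/self/attr/keycreate so the session keyring gets that context; if the host kernel or policy rejects the label, the write returns EINVAL, container creation aborts, and kubelet records the RunContainerError seen here. The sketch below is illustrative only, not the runtime's actual code: the procfs path and the spc_t/s0 parts are taken from the log, while the user/role portions of the label are an assumption.

// Illustrative Go sketch (not runc source): performs the single procfs write
// that the log above reports failing.
package main

import (
	"fmt"
	"os"
)

func main() {
	const attrPath = "/proc/self/attr/keycreate" // path quoted in the error above
	label := "system_u:system_r:spc_t:s0"        // assumed full context for Type:spc_t, Level:s0

	f, err := os.OpenFile(attrPath, os.O_WRONLY, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, "open:", err)
		os.Exit(1)
	}
	defer f.Close()

	// On a host whose SELinux policy does not accept this label, the kernel
	// rejects the write with EINVAL ("invalid argument"), matching the log entry.
	if _, err := f.WriteString(label); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	fmt.Println("keyring label set to", label)
}

Run inside a container with those SELinuxOptions on an affected host, the write fails exactly as containerd and kubelet report above; on a host with a matching policy it succeeds silently.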
Sep 13 00:57:16.132579 env[1711]: time="2025-09-13T00:57:16.132531175Z" level=info msg="shim disconnected" id=7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74 Sep 13 00:57:16.132579 env[1711]: time="2025-09-13T00:57:16.132576121Z" level=warning msg="cleaning up after shim disconnected" id=7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74 namespace=k8s.io Sep 13 00:57:16.132579 env[1711]: time="2025-09-13T00:57:16.132586400Z" level=info msg="cleaning up dead shim" Sep 13 00:57:16.142142 env[1711]: time="2025-09-13T00:57:16.142093606Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n" Sep 13 00:57:16.142425 env[1711]: time="2025-09-13T00:57:16.142397093Z" level=info msg="TearDown network for sandbox \"7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74\" successfully" Sep 13 00:57:16.142482 env[1711]: time="2025-09-13T00:57:16.142424036Z" level=info msg="StopPodSandbox for \"7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74\" returns successfully" Sep 13 00:57:16.273828 kubelet[2062]: I0913 00:57:16.273756 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-bpf-maps\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.273828 kubelet[2062]: I0913 00:57:16.273837 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-ipsec-secrets\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273866 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-hostproc\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273882 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-etc-cni-netd\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273900 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-kernel\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273915 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-xtables-lock\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273936 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-net\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: 
\"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273952 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-lib-modules\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273972 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-clustermesh-secrets\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.273991 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4ft2\" (UniqueName: \"kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-kube-api-access-q4ft2\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.274006 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-run\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.274023 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-config-path\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274055 kubelet[2062]: I0913 00:57:16.274047 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-hubble-tls\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274362 kubelet[2062]: I0913 00:57:16.274062 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-cgroup\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274362 kubelet[2062]: I0913 00:57:16.274077 2062 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cni-path\") pod \"90045384-b740-41b7-8b27-0e055abfc5ce\" (UID: \"90045384-b740-41b7-8b27-0e055abfc5ce\") " Sep 13 00:57:16.274362 kubelet[2062]: I0913 00:57:16.274145 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cni-path" (OuterVolumeSpecName: "cni-path") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274362 kubelet[2062]: I0913 00:57:16.274171 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274480 kubelet[2062]: I0913 00:57:16.274463 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274513 kubelet[2062]: I0913 00:57:16.274488 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-hostproc" (OuterVolumeSpecName: "hostproc") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274513 kubelet[2062]: I0913 00:57:16.274501 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274572 kubelet[2062]: I0913 00:57:16.274514 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274572 kubelet[2062]: I0913 00:57:16.274529 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274572 kubelet[2062]: I0913 00:57:16.274541 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.274572 kubelet[2062]: I0913 00:57:16.274560 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.277165 kubelet[2062]: I0913 00:57:16.277127 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:57:16.278056 kubelet[2062]: I0913 00:57:16.278023 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 00:57:16.280436 kubelet[2062]: I0913 00:57:16.280398 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:57:16.280668 kubelet[2062]: I0913 00:57:16.280653 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:57:16.281374 kubelet[2062]: I0913 00:57:16.281354 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-kube-api-access-q4ft2" (OuterVolumeSpecName: "kube-api-access-q4ft2") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "kube-api-access-q4ft2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:57:16.282701 kubelet[2062]: I0913 00:57:16.282673 2062 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90045384-b740-41b7-8b27-0e055abfc5ce" (UID: "90045384-b740-41b7-8b27-0e055abfc5ce"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.374850 2062 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4ft2\" (UniqueName: \"kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-kube-api-access-q4ft2\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.374885 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-run\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.374895 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-config-path\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.374903 2062 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90045384-b740-41b7-8b27-0e055abfc5ce-hubble-tls\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.374986 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-cgroup\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.374995 2062 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-cni-path\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375003 2062 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-bpf-maps\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375011 2062 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-cilium-ipsec-secrets\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375019 2062 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-hostproc\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375026 2062 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-etc-cni-netd\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375036 2062 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-kernel\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375044 2062 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-xtables-lock\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375051 2062 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-lib-modules\") on 
node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375060 2062 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90045384-b740-41b7-8b27-0e055abfc5ce-clustermesh-secrets\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.375840 kubelet[2062]: I0913 00:57:16.375067 2062 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90045384-b740-41b7-8b27-0e055abfc5ce-host-proc-sys-net\") on node \"172.31.27.34\" DevicePath \"\"" Sep 13 00:57:16.675199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7491e81e42f4bfb024ec65a970800bde45eff36ab37517e82eff216f664f9e74-shm.mount: Deactivated successfully. Sep 13 00:57:16.675302 systemd[1]: var-lib-kubelet-pods-90045384\x2db740\x2d41b7\x2d8b27\x2d0e055abfc5ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4ft2.mount: Deactivated successfully. Sep 13 00:57:16.675362 systemd[1]: var-lib-kubelet-pods-90045384\x2db740\x2d41b7\x2d8b27\x2d0e055abfc5ce-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:57:16.675417 systemd[1]: var-lib-kubelet-pods-90045384\x2db740\x2d41b7\x2d8b27\x2d0e055abfc5ce-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 13 00:57:16.675474 systemd[1]: var-lib-kubelet-pods-90045384\x2db740\x2d41b7\x2d8b27\x2d0e055abfc5ce-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:57:16.786340 kubelet[2062]: E0913 00:57:16.786287 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:16.834277 kubelet[2062]: I0913 00:57:16.834225 2062 setters.go:602] "Node became not ready" node="172.31.27.34" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:57:16Z","lastTransitionTime":"2025-09-13T00:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 00:57:17.096285 kubelet[2062]: I0913 00:57:17.096255 2062 scope.go:117] "RemoveContainer" containerID="6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d" Sep 13 00:57:17.100422 systemd[1]: Removed slice kubepods-burstable-pod90045384_b740_41b7_8b27_0e055abfc5ce.slice. Sep 13 00:57:17.103622 env[1711]: time="2025-09-13T00:57:17.102566401Z" level=info msg="RemoveContainer for \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\"" Sep 13 00:57:17.108129 env[1711]: time="2025-09-13T00:57:17.108071898Z" level=info msg="RemoveContainer for \"6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d\" returns successfully" Sep 13 00:57:17.202070 kubelet[2062]: I0913 00:57:17.202034 2062 memory_manager.go:355] "RemoveStaleState removing state" podUID="90045384-b740-41b7-8b27-0e055abfc5ce" containerName="mount-cgroup" Sep 13 00:57:17.210313 systemd[1]: Created slice kubepods-burstable-pod7cc5124d_2d13_4405_8e60_6bc8615a0fba.slice. 
Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281122 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-host-proc-sys-net\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281172 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-host-proc-sys-kernel\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281217 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-cilium-run\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281244 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cc5124d-2d13-4405-8e60-6bc8615a0fba-cilium-ipsec-secrets\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281282 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49whr\" (UniqueName: \"kubernetes.io/projected/7cc5124d-2d13-4405-8e60-6bc8615a0fba-kube-api-access-49whr\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281307 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-bpf-maps\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281345 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cc5124d-2d13-4405-8e60-6bc8615a0fba-clustermesh-secrets\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281369 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cc5124d-2d13-4405-8e60-6bc8615a0fba-hubble-tls\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281394 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-cilium-cgroup\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281433 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-etc-cni-netd\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281460 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-xtables-lock\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281498 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-lib-modules\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281521 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cc5124d-2d13-4405-8e60-6bc8615a0fba-cilium-config-path\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281549 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-hostproc\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.281755 kubelet[2062]: I0913 00:57:17.281583 2062 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cc5124d-2d13-4405-8e60-6bc8615a0fba-cni-path\") pod \"cilium-sb5tz\" (UID: \"7cc5124d-2d13-4405-8e60-6bc8615a0fba\") " pod="kube-system/cilium-sb5tz" Sep 13 00:57:17.347164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500435441.mount: Deactivated successfully. Sep 13 00:57:17.520488 env[1711]: time="2025-09-13T00:57:17.520453250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sb5tz,Uid:7cc5124d-2d13-4405-8e60-6bc8615a0fba,Namespace:kube-system,Attempt:0,}" Sep 13 00:57:17.557152 env[1711]: time="2025-09-13T00:57:17.557065709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:57:17.557152 env[1711]: time="2025-09-13T00:57:17.557114336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:57:17.557388 env[1711]: time="2025-09-13T00:57:17.557130277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:57:17.557388 env[1711]: time="2025-09-13T00:57:17.557296545Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c pid=3954 runtime=io.containerd.runc.v2 Sep 13 00:57:17.578278 systemd[1]: Started cri-containerd-5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c.scope. 
Sep 13 00:57:17.636758 env[1711]: time="2025-09-13T00:57:17.636640803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sb5tz,Uid:7cc5124d-2d13-4405-8e60-6bc8615a0fba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\"" Sep 13 00:57:17.640434 env[1711]: time="2025-09-13T00:57:17.640383526Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 00:57:17.660870 env[1711]: time="2025-09-13T00:57:17.660805436Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44\"" Sep 13 00:57:17.661856 env[1711]: time="2025-09-13T00:57:17.661823754Z" level=info msg="StartContainer for \"5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44\"" Sep 13 00:57:17.709248 systemd[1]: Started cri-containerd-5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44.scope. Sep 13 00:57:17.769998 env[1711]: time="2025-09-13T00:57:17.769941172Z" level=info msg="StartContainer for \"5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44\" returns successfully" Sep 13 00:57:17.786537 systemd[1]: cri-containerd-5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44.scope: Deactivated successfully. Sep 13 00:57:17.787291 kubelet[2062]: E0913 00:57:17.786812 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:17.819684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44-rootfs.mount: Deactivated successfully. 
Sep 13 00:57:17.901273 env[1711]: time="2025-09-13T00:57:17.901144984Z" level=info msg="shim disconnected" id=5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44 Sep 13 00:57:17.901273 env[1711]: time="2025-09-13T00:57:17.901206091Z" level=warning msg="cleaning up after shim disconnected" id=5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44 namespace=k8s.io Sep 13 00:57:17.901273 env[1711]: time="2025-09-13T00:57:17.901218774Z" level=info msg="cleaning up dead shim" Sep 13 00:57:17.913856 env[1711]: time="2025-09-13T00:57:17.913806491Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n" Sep 13 00:57:17.926174 kubelet[2062]: I0913 00:57:17.926133 2062 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90045384-b740-41b7-8b27-0e055abfc5ce" path="/var/lib/kubelet/pods/90045384-b740-41b7-8b27-0e055abfc5ce/volumes" Sep 13 00:57:18.103617 env[1711]: time="2025-09-13T00:57:18.103570221Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 00:57:18.142279 env[1711]: time="2025-09-13T00:57:18.142225809Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef\"" Sep 13 00:57:18.143432 env[1711]: time="2025-09-13T00:57:18.143392636Z" level=info msg="StartContainer for \"078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef\"" Sep 13 00:57:18.182415 systemd[1]: Started cri-containerd-078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef.scope. Sep 13 00:57:18.232426 env[1711]: time="2025-09-13T00:57:18.232368535Z" level=info msg="StartContainer for \"078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef\" returns successfully" Sep 13 00:57:18.241053 env[1711]: time="2025-09-13T00:57:18.240996534Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:18.246397 env[1711]: time="2025-09-13T00:57:18.246353470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:18.249379 systemd[1]: cri-containerd-078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef.scope: Deactivated successfully. 
Sep 13 00:57:18.250092 env[1711]: time="2025-09-13T00:57:18.250052106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:57:18.251024 env[1711]: time="2025-09-13T00:57:18.250977204Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 00:57:18.253867 env[1711]: time="2025-09-13T00:57:18.253828551Z" level=info msg="CreateContainer within sandbox \"e99d11fe925baf004635edbb925f463b53bf354ecf34a5a14f688ae7f8393baa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 00:57:18.297557 env[1711]: time="2025-09-13T00:57:18.297508465Z" level=info msg="CreateContainer within sandbox \"e99d11fe925baf004635edbb925f463b53bf354ecf34a5a14f688ae7f8393baa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2eeb7b73272ff2cf5a6d9e82c8717e1e3fe34541751b92a2fc92d6e64d44822b\"" Sep 13 00:57:18.298399 env[1711]: time="2025-09-13T00:57:18.298283184Z" level=info msg="StartContainer for \"2eeb7b73272ff2cf5a6d9e82c8717e1e3fe34541751b92a2fc92d6e64d44822b\"" Sep 13 00:57:18.316884 systemd[1]: Started cri-containerd-2eeb7b73272ff2cf5a6d9e82c8717e1e3fe34541751b92a2fc92d6e64d44822b.scope. Sep 13 00:57:18.354050 env[1711]: time="2025-09-13T00:57:18.353979957Z" level=info msg="shim disconnected" id=078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef Sep 13 00:57:18.354563 env[1711]: time="2025-09-13T00:57:18.354525777Z" level=warning msg="cleaning up after shim disconnected" id=078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef namespace=k8s.io Sep 13 00:57:18.354695 env[1711]: time="2025-09-13T00:57:18.354677439Z" level=info msg="cleaning up dead shim" Sep 13 00:57:18.355098 env[1711]: time="2025-09-13T00:57:18.355057138Z" level=info msg="StartContainer for \"2eeb7b73272ff2cf5a6d9e82c8717e1e3fe34541751b92a2fc92d6e64d44822b\" returns successfully" Sep 13 00:57:18.367135 env[1711]: time="2025-09-13T00:57:18.367087626Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4141 runtime=io.containerd.runc.v2\n" Sep 13 00:57:18.675112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915189028.mount: Deactivated successfully. Sep 13 00:57:18.787867 kubelet[2062]: E0913 00:57:18.787818 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:19.108296 env[1711]: time="2025-09-13T00:57:19.108254329Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 00:57:19.131661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784396600.mount: Deactivated successfully. 
Sep 13 00:57:19.145580 kubelet[2062]: W0913 00:57:19.145524 2062 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90045384_b740_41b7_8b27_0e055abfc5ce.slice/cri-containerd-6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d.scope WatchSource:0}: container "6655aa2dc314fcff7157a47eefbb2e85fd115d6883f1cfb2b00b52c91c06cf7d" in namespace "k8s.io": not found Sep 13 00:57:19.146969 env[1711]: time="2025-09-13T00:57:19.146845661Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5\"" Sep 13 00:57:19.147885 env[1711]: time="2025-09-13T00:57:19.147661385Z" level=info msg="StartContainer for \"789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5\"" Sep 13 00:57:19.162406 kubelet[2062]: I0913 00:57:19.161842 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mtbpl" podStartSLOduration=1.885751604 podStartE2EDuration="4.16181997s" podCreationTimestamp="2025-09-13 00:57:15 +0000 UTC" firstStartedPulling="2025-09-13 00:57:15.976121154 +0000 UTC m=+60.891937207" lastFinishedPulling="2025-09-13 00:57:18.252189503 +0000 UTC m=+63.168005573" observedRunningTime="2025-09-13 00:57:19.121341817 +0000 UTC m=+64.037157879" watchObservedRunningTime="2025-09-13 00:57:19.16181997 +0000 UTC m=+64.077636034" Sep 13 00:57:19.175922 systemd[1]: Started cri-containerd-789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5.scope. Sep 13 00:57:19.217519 env[1711]: time="2025-09-13T00:57:19.217332732Z" level=info msg="StartContainer for \"789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5\" returns successfully" Sep 13 00:57:19.229318 systemd[1]: cri-containerd-789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5.scope: Deactivated successfully. Sep 13 00:57:19.266068 env[1711]: time="2025-09-13T00:57:19.266011746Z" level=info msg="shim disconnected" id=789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5 Sep 13 00:57:19.266325 env[1711]: time="2025-09-13T00:57:19.266297230Z" level=warning msg="cleaning up after shim disconnected" id=789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5 namespace=k8s.io Sep 13 00:57:19.266395 env[1711]: time="2025-09-13T00:57:19.266382856Z" level=info msg="cleaning up dead shim" Sep 13 00:57:19.275315 env[1711]: time="2025-09-13T00:57:19.275267406Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4202 runtime=io.containerd.runc.v2\n" Sep 13 00:57:19.674332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5-rootfs.mount: Deactivated successfully. 
Sep 13 00:57:19.788141 kubelet[2062]: E0913 00:57:19.788009 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:20.113804 env[1711]: time="2025-09-13T00:57:20.113745979Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 00:57:20.142210 env[1711]: time="2025-09-13T00:57:20.142133892Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27\"" Sep 13 00:57:20.143028 env[1711]: time="2025-09-13T00:57:20.142716336Z" level=info msg="StartContainer for \"1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27\"" Sep 13 00:57:20.164645 systemd[1]: Started cri-containerd-1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27.scope. Sep 13 00:57:20.200167 systemd[1]: cri-containerd-1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27.scope: Deactivated successfully. Sep 13 00:57:20.202309 env[1711]: time="2025-09-13T00:57:20.202260668Z" level=info msg="StartContainer for \"1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27\" returns successfully" Sep 13 00:57:20.239251 env[1711]: time="2025-09-13T00:57:20.239199495Z" level=info msg="shim disconnected" id=1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27 Sep 13 00:57:20.239251 env[1711]: time="2025-09-13T00:57:20.239247395Z" level=warning msg="cleaning up after shim disconnected" id=1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27 namespace=k8s.io Sep 13 00:57:20.239251 env[1711]: time="2025-09-13T00:57:20.239257528Z" level=info msg="cleaning up dead shim" Sep 13 00:57:20.248217 env[1711]: time="2025-09-13T00:57:20.248171366Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:57:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4257 runtime=io.containerd.runc.v2\n" Sep 13 00:57:20.674200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27-rootfs.mount: Deactivated successfully. 
Sep 13 00:57:20.788911 kubelet[2062]: E0913 00:57:20.788856 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:20.861970 kubelet[2062]: E0913 00:57:20.861932 2062 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:57:21.117969 env[1711]: time="2025-09-13T00:57:21.117920215Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:57:21.149756 env[1711]: time="2025-09-13T00:57:21.149635251Z" level=info msg="CreateContainer within sandbox \"5bd2ad7ff9738c880dff82ad48125c5fd78551f08664712ab353201c5c47653c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2\"" Sep 13 00:57:21.150227 env[1711]: time="2025-09-13T00:57:21.150195934Z" level=info msg="StartContainer for \"55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2\"" Sep 13 00:57:21.180687 systemd[1]: Started cri-containerd-55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2.scope. Sep 13 00:57:21.226020 env[1711]: time="2025-09-13T00:57:21.225959131Z" level=info msg="StartContainer for \"55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2\" returns successfully" Sep 13 00:57:21.790050 kubelet[2062]: E0913 00:57:21.789961 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:21.815985 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 00:57:22.260123 kubelet[2062]: W0913 00:57:22.260082 2062 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc5124d_2d13_4405_8e60_6bc8615a0fba.slice/cri-containerd-5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44.scope WatchSource:0}: task 5283734b468766fad41eb7c84922bf60b584597a1c96f473a016113861d46f44 not found: not found Sep 13 00:57:22.723025 systemd[1]: run-containerd-runc-k8s.io-55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2-runc.HBb5Uk.mount: Deactivated successfully. Sep 13 00:57:22.791092 kubelet[2062]: E0913 00:57:22.791049 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:23.791402 kubelet[2062]: E0913 00:57:23.791359 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:24.652361 (udev-worker)[4348]: Network interface NamePolicy= disabled on kernel command line. Sep 13 00:57:24.653069 (udev-worker)[4818]: Network interface NamePolicy= disabled on kernel command line. 
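Editor's note: the entries leading up to this point record the full Cilium init-container chain for cilium-sb5tz (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) followed by the long-running cilium-agent container. The name-to-container-id mapping is spread across very long journal lines; the helper below is a hypothetical convenience for extracting it from a journal dump like this one, not part of containerd or kubelet, and it assumes the escaped-quote line format shown above.

// Hypothetical helper: scans journal lines on stdin for containerd's
// "CreateContainer ... returns container id" entries and prints name -> id.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches: for &ContainerMetadata{Name:<name>,Attempt:0,} returns container id \"<hex id>\"
	re := regexp.MustCompile(`for &ContainerMetadata\{Name:([^,]+),Attempt:\d+,\} returns container id \\"([0-9a-f]+)\\"`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines in this log are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-25s %s\n", m[1], m[2])
		}
	}
}

Fed this section on stdin, it would print one line per CreateContainer result, e.g. mount-cgroup followed by its 64-character container id, in the order the containers were created.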
Sep 13 00:57:24.666536 systemd-networkd[1444]: lxc_health: Link UP Sep 13 00:57:24.676078 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:57:24.675673 systemd-networkd[1444]: lxc_health: Gained carrier Sep 13 00:57:24.792665 kubelet[2062]: E0913 00:57:24.792616 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:24.896778 systemd[1]: run-containerd-runc-k8s.io-55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2-runc.p9A3gV.mount: Deactivated successfully. Sep 13 00:57:25.369428 kubelet[2062]: W0913 00:57:25.369376 2062 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc5124d_2d13_4405_8e60_6bc8615a0fba.slice/cri-containerd-078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef.scope WatchSource:0}: task 078a7c4f342e8447e837c6b0b4a04c43d6f1b119bc1170c12552fe14f263bfef not found: not found Sep 13 00:57:25.548568 kubelet[2062]: I0913 00:57:25.548502 2062 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sb5tz" podStartSLOduration=8.548479805 podStartE2EDuration="8.548479805s" podCreationTimestamp="2025-09-13 00:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:57:22.168975269 +0000 UTC m=+67.084791329" watchObservedRunningTime="2025-09-13 00:57:25.548479805 +0000 UTC m=+70.464295868" Sep 13 00:57:25.793849 kubelet[2062]: E0913 00:57:25.793794 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:26.095923 systemd-networkd[1444]: lxc_health: Gained IPv6LL Sep 13 00:57:26.794039 kubelet[2062]: E0913 00:57:26.793967 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:27.795148 kubelet[2062]: E0913 00:57:27.795099 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:28.488537 kubelet[2062]: W0913 00:57:28.488497 2062 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc5124d_2d13_4405_8e60_6bc8615a0fba.slice/cri-containerd-789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5.scope WatchSource:0}: task 789ca078bc97b6d9a05079014404e936ea505602220f6e20b19661e05ab204b5 not found: not found Sep 13 00:57:28.795654 kubelet[2062]: E0913 00:57:28.795532 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:29.538294 systemd[1]: run-containerd-runc-k8s.io-55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2-runc.S3AAZ2.mount: Deactivated successfully. 
Sep 13 00:57:29.797635 kubelet[2062]: E0913 00:57:29.797512 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:30.798803 kubelet[2062]: E0913 00:57:30.798725 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:31.600219 kubelet[2062]: W0913 00:57:31.600182 2062 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc5124d_2d13_4405_8e60_6bc8615a0fba.slice/cri-containerd-1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27.scope WatchSource:0}: task 1854a49a5b48e01e796896d5469f9f58c567c16f95747f3e988f61fb0631bb27 not found: not found Sep 13 00:57:31.800840 kubelet[2062]: E0913 00:57:31.799367 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:32.800485 kubelet[2062]: E0913 00:57:32.800431 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:33.801119 kubelet[2062]: E0913 00:57:33.801041 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:33.993044 systemd[1]: run-containerd-runc-k8s.io-55d540a77c0a0408b4a9e13a1a31a08dd0d58fe3860b61fe68bc71dc5211ecf2-runc.J33A2z.mount: Deactivated successfully. Sep 13 00:57:34.801767 kubelet[2062]: E0913 00:57:34.801713 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:35.738748 kubelet[2062]: E0913 00:57:35.738625 2062 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:35.802670 kubelet[2062]: E0913 00:57:35.802627 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 13 00:57:36.803334 kubelet[2062]: E0913 00:57:36.803274 2062 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"