May 17 00:39:01.031062 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:39:01.031094 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:39:01.031113 kernel: BIOS-provided physical RAM map:
May 17 00:39:01.031125 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:39:01.031135 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
May 17 00:39:01.031146 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 17 00:39:01.031160 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 17 00:39:01.031172 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 17 00:39:01.031186 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 17 00:39:01.031198 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 17 00:39:01.031209 kernel: NX (Execute Disable) protection: active
May 17 00:39:01.031221 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
May 17 00:39:01.031233 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
May 17 00:39:01.031246 kernel: extended physical RAM map:
May 17 00:39:01.031263 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:39:01.031276 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
May 17 00:39:01.031289 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
May 17 00:39:01.031302 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
May 17 00:39:01.031338 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
May 17 00:39:01.031352 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
May 17 00:39:01.031365 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
May 17 00:39:01.031377 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
May 17 00:39:01.031390 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
May 17 00:39:01.031402 kernel: efi: EFI v2.70 by EDK II
May 17 00:39:01.031418 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77004a98
May 17 00:39:01.031431 kernel: SMBIOS 2.7 present.
May 17 00:39:01.031444 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
May 17 00:39:01.031457 kernel: Hypervisor detected: KVM
May 17 00:39:01.031469 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:39:01.031481 kernel: kvm-clock: cpu 0, msr 1d19a001, primary cpu clock
May 17 00:39:01.031494 kernel: kvm-clock: using sched offset of 4463994554 cycles
May 17 00:39:01.031508 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:39:01.031521 kernel: tsc: Detected 2500.004 MHz processor
May 17 00:39:01.031535 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:39:01.031547 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:39:01.031563 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
May 17 00:39:01.031576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:39:01.031589 kernel: Using GB pages for direct mapping
May 17 00:39:01.031602 kernel: Secure boot disabled
May 17 00:39:01.031616 kernel: ACPI: Early table checksum verification disabled
May 17 00:39:01.031634 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
May 17 00:39:01.031648 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
May 17 00:39:01.031664 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 17 00:39:01.031678 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
May 17 00:39:01.031692 kernel: ACPI: FACS 0x00000000789D0000 000040
May 17 00:39:01.031707 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
May 17 00:39:01.031721 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 17 00:39:01.031735 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 17 00:39:01.031749 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
May 17 00:39:01.031765 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
May 17 00:39:01.031779 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 17 00:39:01.031794 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
May 17 00:39:01.031808 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
May 17 00:39:01.031822 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
May 17 00:39:01.031837 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
May 17 00:39:01.031851 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
May 17 00:39:01.031865 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
May 17 00:39:01.031879 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
May 17 00:39:01.031896 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
May 17 00:39:01.031910 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
May 17 00:39:01.031925 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
May 17 00:39:01.031938 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
May 17 00:39:01.031952 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
May 17 00:39:01.031966 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
May 17 00:39:01.031980 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:39:01.031994 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:39:01.032008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
May 17 00:39:01.032025 kernel: NUMA: Initialized distance table, cnt=1
May 17 00:39:01.032038 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
May 17 00:39:01.032053 kernel: Zone ranges:
May 17 00:39:01.032067 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:39:01.032081 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
May 17 00:39:01.032095 kernel: Normal empty
May 17 00:39:01.032109 kernel: Movable zone start for each node
May 17 00:39:01.032123 kernel: Early memory node ranges
May 17 00:39:01.032137 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:39:01.032153 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
May 17 00:39:01.032168 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
May 17 00:39:01.032182 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
May 17 00:39:01.032196 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:39:01.032209 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:39:01.032224 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 17 00:39:01.032238 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
May 17 00:39:01.032252 kernel: ACPI: PM-Timer IO Port: 0xb008
May 17 00:39:01.032267 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
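The BIOS-e820 lines above give the firmware's physical memory map: one inclusive address range per entry, tagged usable, reserved, ACPI data, or ACPI NVS. A minimal sketch of extracting those ranges from a captured log like this one (the `parse_e820` and `usable_bytes` names are mine, not from any existing tool):

```python
import re

# Matches e.g. "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable"
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+.*)")

def parse_e820(lines):
    """Return a list of (start, end, type) tuples from dmesg-style lines."""
    entries = []
    for line in lines:
        m = E820_RE.search(line)
        if m:
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            entries.append((start, end, m.group(3).strip()))
    return entries

def usable_bytes(entries):
    # Ranges are inclusive, so each one covers end - start + 1 bytes.
    return sum(end - start + 1 for start, end, typ in entries if typ == "usable")
```

Summing `usable_bytes` over the full map yields the RAM the kernel can actually use, i.e. the instance's nominal memory minus firmware reservations.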
May 17 00:39:01.032283 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
May 17 00:39:01.032297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:39:01.032323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:39:01.032334 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:39:01.032347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:39:01.032361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:39:01.032375 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:39:01.032389 kernel: TSC deadline timer available
May 17 00:39:01.032402 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:39:01.032417 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
May 17 00:39:01.032434 kernel: Booting paravirtualized kernel on KVM
May 17 00:39:01.032449 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:39:01.032463 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:39:01.032477 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:39:01.032491 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:39:01.032506 kernel: pcpu-alloc: [0] 0 1
May 17 00:39:01.032519 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
May 17 00:39:01.032533 kernel: kvm-guest: PV spinlocks enabled
May 17 00:39:01.032547 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:39:01.032565 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
May 17 00:39:01.032579 kernel: Policy zone: DMA32
May 17 00:39:01.032595 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:39:01.032609 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:39:01.032623 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:39:01.032637 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:39:01.032652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:39:01.032670 kernel: Memory: 1876640K/2037804K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 160904K reserved, 0K cma-reserved)
May 17 00:39:01.032684 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:39:01.032698 kernel: Kernel/User page tables isolation: enabled
May 17 00:39:01.032712 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:39:01.032727 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:39:01.032740 kernel: rcu: Hierarchical RCU implementation.
May 17 00:39:01.032756 kernel: rcu: RCU event tracing is enabled.
May 17 00:39:01.032783 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:39:01.032798 kernel: Rude variant of Tasks RCU enabled.
May 17 00:39:01.032814 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:39:01.032829 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:39:01.032843 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:39:01.032862 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:39:01.032876 kernel: random: crng init done
May 17 00:39:01.032891 kernel: Console: colour dummy device 80x25
May 17 00:39:01.032906 kernel: printk: console [tty0] enabled
May 17 00:39:01.032921 kernel: printk: console [ttyS0] enabled
May 17 00:39:01.032935 kernel: ACPI: Core revision 20210730
May 17 00:39:01.032951 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
May 17 00:39:01.032969 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:39:01.032984 kernel: x2apic enabled
May 17 00:39:01.032999 kernel: Switched APIC routing to physical x2apic.
May 17 00:39:01.033014 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
May 17 00:39:01.033029 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
May 17 00:39:01.033044 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:39:01.033059 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:39:01.033077 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:39:01.033091 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:39:01.033106 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:39:01.033121 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:39:01.033136 kernel: RETBleed: Vulnerable
May 17 00:39:01.033151 kernel: Speculative Store Bypass: Vulnerable
May 17 00:39:01.033165 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:39:01.033179 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:39:01.033194 kernel: GDS: Unknown: Dependent on hypervisor status
May 17 00:39:01.033209 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:39:01.033224 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:39:01.033241 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:39:01.033255 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
May 17 00:39:01.033270 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
May 17 00:39:01.033285 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:39:01.033299 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:39:01.037376 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:39:01.037419 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 17 00:39:01.037435 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:39:01.037451 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
May 17 00:39:01.037466 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
May 17 00:39:01.037481 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
May 17 00:39:01.037502 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
May 17 00:39:01.037516 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
May 17 00:39:01.037530 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
May 17 00:39:01.037545 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
May 17 00:39:01.037560 kernel: Freeing SMP alternatives memory: 32K
May 17 00:39:01.037575 kernel: pid_max: default: 32768 minimum: 301
May 17 00:39:01.037590 kernel: LSM: Security Framework initializing
May 17 00:39:01.037605 kernel: SELinux: Initializing.
May 17 00:39:01.037621 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:39:01.037635 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:39:01.037651 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
May 17 00:39:01.037669 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:39:01.037684 kernel: signal: max sigframe size: 3632
May 17 00:39:01.037700 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:39:01.037715 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:39:01.037730 kernel: smp: Bringing up secondary CPUs ...
May 17 00:39:01.037746 kernel: x86: Booting SMP configuration:
May 17 00:39:01.037761 kernel: .... node #0, CPUs: #1
May 17 00:39:01.037776 kernel: kvm-clock: cpu 1, msr 1d19a041, secondary cpu clock
May 17 00:39:01.037790 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
May 17 00:39:01.037807 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
May 17 00:39:01.037826 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:39:01.037841 kernel: smp: Brought up 1 node, 2 CPUs May 17 00:39:01.037856 kernel: smpboot: Max logical packages: 1 May 17 00:39:01.037871 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) May 17 00:39:01.037886 kernel: devtmpfs: initialized May 17 00:39:01.037901 kernel: x86/mm: Memory block size: 128MB May 17 00:39:01.037916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) May 17 00:39:01.037931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:39:01.037949 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 17 00:39:01.037964 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:39:01.037980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:39:01.037995 kernel: audit: initializing netlink subsys (disabled) May 17 00:39:01.038009 kernel: audit: type=2000 audit(1747442340.889:1): state=initialized audit_enabled=0 res=1 May 17 00:39:01.038024 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:39:01.038039 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:39:01.038055 kernel: cpuidle: using governor menu May 17 00:39:01.038070 kernel: ACPI: bus type PCI registered May 17 00:39:01.038087 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:39:01.038102 kernel: dca service started, version 1.12.1 May 17 00:39:01.038117 kernel: PCI: Using configuration type 1 for base access May 17 00:39:01.038133 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:39:01.038148 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:39:01.038163 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:39:01.038178 kernel: ACPI: Added _OSI(Module Device) May 17 00:39:01.038193 kernel: ACPI: Added _OSI(Processor Device) May 17 00:39:01.038208 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:39:01.038226 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:39:01.038240 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:39:01.038255 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:39:01.038270 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:39:01.038300 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded May 17 00:39:01.038329 kernel: ACPI: Interpreter enabled May 17 00:39:01.038344 kernel: ACPI: PM: (supports S0 S5) May 17 00:39:01.038360 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:39:01.038375 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:39:01.038394 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 17 00:39:01.038409 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:39:01.038622 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 17 00:39:01.038755 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 17 00:39:01.038776 kernel: acpiphp: Slot [3] registered May 17 00:39:01.038791 kernel: acpiphp: Slot [4] registered May 17 00:39:01.038807 kernel: acpiphp: Slot [5] registered May 17 00:39:01.038825 kernel: acpiphp: Slot [6] registered May 17 00:39:01.038840 kernel: acpiphp: Slot [7] registered May 17 00:39:01.038855 kernel: acpiphp: Slot [8] registered May 17 00:39:01.038870 kernel: acpiphp: Slot [9] registered May 17 00:39:01.038886 kernel: acpiphp: Slot [10] registered May 17 00:39:01.038901 kernel: acpiphp: Slot [11] registered May 17 00:39:01.038916 kernel: acpiphp: Slot [12] registered May 17 00:39:01.038931 kernel: acpiphp: Slot [13] registered May 17 00:39:01.038946 kernel: acpiphp: Slot [14] registered May 17 00:39:01.038960 kernel: acpiphp: Slot [15] registered May 17 00:39:01.038977 kernel: acpiphp: Slot [16] registered May 17 00:39:01.038992 kernel: acpiphp: Slot [17] registered May 17 00:39:01.039007 kernel: acpiphp: Slot [18] registered May 17 00:39:01.039022 kernel: acpiphp: Slot [19] registered May 17 00:39:01.039037 kernel: acpiphp: Slot [20] registered May 17 00:39:01.039052 kernel: acpiphp: Slot [21] registered May 17 00:39:01.039067 kernel: acpiphp: Slot [22] registered May 17 00:39:01.039082 kernel: acpiphp: Slot [23] registered May 17 00:39:01.039097 kernel: acpiphp: Slot [24] registered May 17 00:39:01.039114 kernel: acpiphp: Slot [25] registered May 17 00:39:01.039129 kernel: acpiphp: Slot [26] registered May 17 00:39:01.039144 kernel: acpiphp: Slot [27] registered May 17 00:39:01.039159 kernel: acpiphp: Slot [28] registered May 17 00:39:01.039174 kernel: acpiphp: Slot [29] registered May 17 00:39:01.039188 kernel: acpiphp: Slot [30] registered May 17 00:39:01.039203 kernel: acpiphp: Slot [31] registered May 17 00:39:01.039218 kernel: PCI host bridge to bus 0000:00 May 17 00:39:01.047037 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:39:01.047203 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] May 17 00:39:01.050344 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:39:01.050524 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] May 17 00:39:01.050636 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] May 17 00:39:01.050741 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:39:01.050883 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 17 00:39:01.051022 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 17 00:39:01.051166 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 May 17 00:39:01.051295 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI May 17 00:39:01.051454 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff May 17 00:39:01.051580 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff May 17 00:39:01.051706 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff May 17 00:39:01.051833 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff May 17 00:39:01.051963 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff May 17 00:39:01.052089 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff May 17 00:39:01.052222 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 May 17 00:39:01.052371 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] May 17 00:39:01.052500 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 17 00:39:01.052627 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb May 17 00:39:01.052755 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:39:01.052899 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 May 17 00:39:01.053040 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] May 17 00:39:01.053176 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 May 17 00:39:01.053304 kernel: pci 
0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] May 17 00:39:01.053350 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:39:01.053365 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:39:01.053380 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:39:01.053400 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:39:01.053415 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 17 00:39:01.053431 kernel: iommu: Default domain type: Translated May 17 00:39:01.053446 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:39:01.053585 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device May 17 00:39:01.053718 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:39:01.053856 kernel: pci 0000:00:03.0: vgaarb: bridge control possible May 17 00:39:01.053876 kernel: vgaarb: loaded May 17 00:39:01.053897 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:39:01.053916 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:39:01.053931 kernel: PTP clock support registered May 17 00:39:01.053950 kernel: Registered efivars operations May 17 00:39:01.053965 kernel: PCI: Using ACPI for IRQ routing May 17 00:39:01.053980 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:39:01.053995 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff] May 17 00:39:01.054015 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] May 17 00:39:01.054030 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] May 17 00:39:01.054043 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 May 17 00:39:01.054061 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter May 17 00:39:01.054076 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:39:01.054090 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:39:01.054106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:39:01.054121 kernel: pnp: PnP ACPI init May 17 00:39:01.054136 kernel: pnp: PnP ACPI: found 5 devices May 17 00:39:01.054151 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:39:01.054166 kernel: NET: Registered PF_INET protocol family May 17 00:39:01.054181 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:39:01.054203 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 17 00:39:01.054219 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:39:01.054234 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 17 00:39:01.054249 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 17 00:39:01.054264 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 17 00:39:01.054280 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:39:01.054363 kernel: UDP-Lite 
hash table entries: 1024 (order: 3, 32768 bytes, linear) May 17 00:39:01.054378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:39:01.054392 kernel: NET: Registered PF_XDP protocol family May 17 00:39:01.054534 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:39:01.054657 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:39:01.054776 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:39:01.054896 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] May 17 00:39:01.055014 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] May 17 00:39:01.055160 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 17 00:39:01.055327 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 17 00:39:01.055349 kernel: PCI: CLS 0 bytes, default 64 May 17 00:39:01.055365 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 17 00:39:01.055387 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns May 17 00:39:01.055403 kernel: clocksource: Switched to clocksource tsc May 17 00:39:01.055424 kernel: Initialise system trusted keyrings May 17 00:39:01.055439 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 17 00:39:01.055453 kernel: Key type asymmetric registered May 17 00:39:01.055474 kernel: Asymmetric key parser 'x509' registered May 17 00:39:01.055489 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:39:01.055506 kernel: io scheduler mq-deadline registered May 17 00:39:01.055521 kernel: io scheduler kyber registered May 17 00:39:01.055535 kernel: io scheduler bfq registered May 17 00:39:01.055555 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:39:01.055570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:39:01.055586 kernel: 00:04: ttyS0 at I/O 
0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:39:01.055601 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:39:01.055616 kernel: i8042: Warning: Keylock active May 17 00:39:01.055636 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:39:01.055654 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:39:01.055804 kernel: rtc_cmos 00:00: RTC can wake from S4 May 17 00:39:01.055943 kernel: rtc_cmos 00:00: registered as rtc0 May 17 00:39:01.056065 kernel: rtc_cmos 00:00: setting system clock to 2025-05-17T00:39:00 UTC (1747442340) May 17 00:39:01.056179 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram May 17 00:39:01.056197 kernel: intel_pstate: CPU model not supported May 17 00:39:01.056213 kernel: efifb: probing for efifb May 17 00:39:01.056228 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k May 17 00:39:01.056247 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 17 00:39:01.056262 kernel: efifb: scrolling: redraw May 17 00:39:01.056276 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 17 00:39:01.056292 kernel: Console: switching to colour frame buffer device 100x37 May 17 00:39:01.056307 kernel: fb0: EFI VGA frame buffer device May 17 00:39:01.056343 kernel: pstore: Registered efi as persistent store backend May 17 00:39:01.056380 kernel: NET: Registered PF_INET6 protocol family May 17 00:39:01.056397 kernel: Segment Routing with IPv6 May 17 00:39:01.056413 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:39:01.056431 kernel: NET: Registered PF_PACKET protocol family May 17 00:39:01.056447 kernel: Key type dns_resolver registered May 17 00:39:01.056463 kernel: IPI shorthand broadcast: enabled May 17 00:39:01.056478 kernel: sched_clock: Marking stable (382210481, 180429761)->(675065143, -112424901) May 17 00:39:01.056494 kernel: registered taskstats version 1 May 17 00:39:01.056510 kernel: Loading compiled-in X.509 certificates May 17 
00:39:01.056526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:39:01.056541 kernel: Key type .fscrypt registered May 17 00:39:01.056556 kernel: Key type fscrypt-provisioning registered May 17 00:39:01.056575 kernel: pstore: Using crash dump compression: deflate May 17 00:39:01.056590 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:39:01.056606 kernel: ima: Allocated hash algorithm: sha1 May 17 00:39:01.056625 kernel: ima: No architecture policies found May 17 00:39:01.056640 kernel: clk: Disabling unused clocks May 17 00:39:01.056656 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:39:01.056671 kernel: Write protecting the kernel read-only data: 28672k May 17 00:39:01.056687 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:39:01.056703 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:39:01.056721 kernel: Run /init as init process May 17 00:39:01.056737 kernel: with arguments: May 17 00:39:01.056753 kernel: /init May 17 00:39:01.056769 kernel: with environment: May 17 00:39:01.056784 kernel: HOME=/ May 17 00:39:01.056800 kernel: TERM=linux May 17 00:39:01.056815 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:39:01.056834 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:39:01.056855 systemd[1]: Detected virtualization amazon. May 17 00:39:01.056872 systemd[1]: Detected architecture x86-64. May 17 00:39:01.056887 systemd[1]: Running in initrd. May 17 00:39:01.056903 systemd[1]: No hostname configured, using default hostname. May 17 00:39:01.056919 systemd[1]: Hostname set to . 
May 17 00:39:01.056935 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:39:01.056951 systemd[1]: Queued start job for default target initrd.target.
May 17 00:39:01.056967 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:39:01.056986 systemd[1]: Reached target cryptsetup.target.
May 17 00:39:01.057002 systemd[1]: Reached target paths.target.
May 17 00:39:01.057018 systemd[1]: Reached target slices.target.
May 17 00:39:01.057034 systemd[1]: Reached target swap.target.
May 17 00:39:01.057053 systemd[1]: Reached target timers.target.
May 17 00:39:01.057073 systemd[1]: Listening on iscsid.socket.
May 17 00:39:01.057089 systemd[1]: Listening on iscsiuio.socket.
May 17 00:39:01.057105 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:39:01.057121 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:39:01.057137 systemd[1]: Listening on systemd-journald.socket.
May 17 00:39:01.057153 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:39:01.057170 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:39:01.057186 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:39:01.057204 systemd[1]: Reached target sockets.target.
May 17 00:39:01.057221 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:39:01.057237 systemd[1]: Finished network-cleanup.service.
May 17 00:39:01.057253 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:39:01.057270 systemd[1]: Starting systemd-journald.service...
May 17 00:39:01.057286 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:39:01.057302 systemd[1]: Starting systemd-modules-load.service...
May 17 00:39:01.066743 systemd[1]: Starting systemd-resolved.service...
May 17 00:39:01.066772 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:39:01.066795 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:39:01.066811 kernel: audit: type=1130 audit(1747442341.058:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.066828 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:39:01.066850 systemd-journald[185]: Journal started
May 17 00:39:01.066947 systemd-journald[185]: Runtime Journal (/run/log/journal/ec27c79694c192ff2dd21587b3c300ef) is 4.8M, max 38.3M, 33.5M free.
May 17 00:39:01.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.077572 systemd[1]: Started systemd-journald.service.
May 17 00:39:01.077648 kernel: audit: type=1130 audit(1747442341.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.077792 systemd-modules-load[186]: Inserted module 'overlay'
May 17 00:39:01.079193 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:39:01.084028 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:39:01.089579 systemd-resolved[187]: Positive Trust Anchors:
May 17 00:39:01.089713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:39:01.090560 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:39:01.138432 kernel: audit: type=1130 audit(1747442341.078:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.138474 kernel: audit: type=1130 audit(1747442341.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.138494 kernel: audit: type=1130 audit(1747442341.120:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.138521 kernel: audit: type=1130 audit(1747442341.121:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.090617 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:39:01.102944 systemd-resolved[187]: Defaulting to hostname 'linux'.
May 17 00:39:01.103704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:39:01.120783 systemd[1]: Started systemd-resolved.service.
May 17 00:39:01.158440 kernel: audit: type=1130 audit(1747442341.147:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.121911 systemd[1]: Reached target nss-lookup.target.
May 17 00:39:01.145795 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:39:01.157790 systemd[1]: Starting dracut-cmdline.service...
May 17 00:39:01.173345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:39:01.176306 dracut-cmdline[201]: dracut-dracut-053
May 17 00:39:01.180771 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:39:01.188434 kernel: Bridge firewalling registered
May 17 00:39:01.181741 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 17 00:39:01.211338 kernel: SCSI subsystem initialized
May 17 00:39:01.232960 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:39:01.233036 kernel: device-mapper: uevent: version 1.0.3
May 17 00:39:01.236158 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:39:01.240706 systemd-modules-load[186]: Inserted module 'dm_multipath'
May 17 00:39:01.242985 systemd[1]: Finished systemd-modules-load.service.
May 17 00:39:01.254685 kernel: audit: type=1130 audit(1747442341.243:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.253628 systemd[1]: Starting systemd-sysctl.service...
May 17 00:39:01.263607 systemd[1]: Finished systemd-sysctl.service.
May 17 00:39:01.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.273420 kernel: audit: type=1130 audit(1747442341.264:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.278356 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:39:01.297348 kernel: iscsi: registered transport (tcp)
May 17 00:39:01.323123 kernel: iscsi: registered transport (qla4xxx)
May 17 00:39:01.323204 kernel: QLogic iSCSI HBA Driver
May 17 00:39:01.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.354607 systemd[1]: Finished dracut-cmdline.service.
May 17 00:39:01.356171 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:39:01.410390 kernel: raid6: avx512x4 gen() 16776 MB/s
May 17 00:39:01.428360 kernel: raid6: avx512x4 xor() 7762 MB/s
May 17 00:39:01.446382 kernel: raid6: avx512x2 gen() 18186 MB/s
May 17 00:39:01.464357 kernel: raid6: avx512x2 xor() 24257 MB/s
May 17 00:39:01.482346 kernel: raid6: avx512x1 gen() 18231 MB/s
May 17 00:39:01.500348 kernel: raid6: avx512x1 xor() 21963 MB/s
May 17 00:39:01.518346 kernel: raid6: avx2x4 gen() 18090 MB/s
May 17 00:39:01.536364 kernel: raid6: avx2x4 xor() 7426 MB/s
May 17 00:39:01.554349 kernel: raid6: avx2x2 gen() 17941 MB/s
May 17 00:39:01.572344 kernel: raid6: avx2x2 xor() 17991 MB/s
May 17 00:39:01.590409 kernel: raid6: avx2x1 gen() 13765 MB/s
May 17 00:39:01.608347 kernel: raid6: avx2x1 xor() 15651 MB/s
May 17 00:39:01.626349 kernel: raid6: sse2x4 gen() 9376 MB/s
May 17 00:39:01.644344 kernel: raid6: sse2x4 xor() 5988 MB/s
May 17 00:39:01.662416 kernel: raid6: sse2x2 gen() 10478 MB/s
May 17 00:39:01.680342 kernel: raid6: sse2x2 xor() 6269 MB/s
May 17 00:39:01.698346 kernel: raid6: sse2x1 gen() 9388 MB/s
May 17 00:39:01.716998 kernel: raid6: sse2x1 xor() 4764 MB/s
May 17 00:39:01.717079 kernel: raid6: using algorithm avx512x1 gen() 18231 MB/s
May 17 00:39:01.717102 kernel: raid6: .... xor() 21963 MB/s, rmw enabled
May 17 00:39:01.718381 kernel: raid6: using avx512x2 recovery algorithm
May 17 00:39:01.734344 kernel: xor: automatically using best checksumming function avx
May 17 00:39:01.838343 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:39:01.847165 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:39:01.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.847000 audit: BPF prog-id=7 op=LOAD
May 17 00:39:01.847000 audit: BPF prog-id=8 op=LOAD
May 17 00:39:01.848690 systemd[1]: Starting systemd-udevd.service...
May 17 00:39:01.861604 systemd-udevd[384]: Using default interface naming scheme 'v252'.
May 17 00:39:01.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.866830 systemd[1]: Started systemd-udevd.service.
May 17 00:39:01.868586 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:39:01.887648 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation
May 17 00:39:01.918982 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:39:01.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.920402 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:39:01.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:01.961706 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:39:02.015337 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:39:02.049285 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:39:02.049391 kernel: AES CTR mode by8 optimization enabled
May 17 00:39:02.058958 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 17 00:39:02.075198 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 17 00:39:02.075392 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
May 17 00:39:02.075521 kernel: nvme nvme0: pci function 0000:00:04.0
May 17 00:39:02.075635 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:39:02.075648 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:da:1c:fe:f0:b7
May 17 00:39:02.082341 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 17 00:39:02.091933 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:39:02.092011 kernel: GPT:9289727 != 16777215
May 17 00:39:02.092031 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:39:02.094769 kernel: GPT:9289727 != 16777215
May 17 00:39:02.094827 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:39:02.097533 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:39:02.102761 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:02.176339 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (430)
May 17 00:39:02.195956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:39:02.235852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:39:02.240982 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:39:02.241767 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:39:02.248103 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:39:02.249944 systemd[1]: Starting disk-uuid.service...
May 17 00:39:02.258188 disk-uuid[592]: Primary Header is updated.
May 17 00:39:02.258188 disk-uuid[592]: Secondary Entries is updated.
May 17 00:39:02.258188 disk-uuid[592]: Secondary Header is updated.
May 17 00:39:02.268338 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:39:02.276341 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:39:02.285344 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:39:03.284981 disk-uuid[593]: The operation has completed successfully.
May 17 00:39:03.286230 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:39:03.402690 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:39:03.402804 systemd[1]: Finished disk-uuid.service.
May 17 00:39:03.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.409466 systemd[1]: Starting verity-setup.service...
May 17 00:39:03.438364 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:39:03.536233 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:39:03.538345 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:39:03.541591 systemd[1]: Finished verity-setup.service.
May 17 00:39:03.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.633351 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:39:03.633816 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:39:03.634658 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:39:03.635668 systemd[1]: Starting ignition-setup.service...
May 17 00:39:03.640434 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:39:03.669349 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:39:03.669426 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:39:03.669446 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:39:03.692345 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:39:03.706807 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:39:03.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.716626 systemd[1]: Finished ignition-setup.service.
May 17 00:39:03.718213 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:39:03.729902 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:39:03.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.731000 audit: BPF prog-id=9 op=LOAD
May 17 00:39:03.732265 systemd[1]: Starting systemd-networkd.service...
May 17 00:39:03.755771 systemd-networkd[1104]: lo: Link UP
May 17 00:39:03.756573 systemd-networkd[1104]: lo: Gained carrier
May 17 00:39:03.757956 systemd-networkd[1104]: Enumeration completed
May 17 00:39:03.758693 systemd[1]: Started systemd-networkd.service.
May 17 00:39:03.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.759632 systemd[1]: Reached target network.target.
May 17 00:39:03.760626 systemd-networkd[1104]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:39:03.762558 systemd[1]: Starting iscsiuio.service...
May 17 00:39:03.769717 systemd-networkd[1104]: eth0: Link UP
May 17 00:39:03.769728 systemd-networkd[1104]: eth0: Gained carrier
May 17 00:39:03.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.769896 systemd[1]: Started iscsiuio.service.
May 17 00:39:03.771832 systemd[1]: Starting iscsid.service...
May 17 00:39:03.776566 iscsid[1109]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:39:03.776566 iscsid[1109]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 17 00:39:03.776566 iscsid[1109]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:39:03.776566 iscsid[1109]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:39:03.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.788175 iscsid[1109]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:39:03.788175 iscsid[1109]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:39:03.778599 systemd[1]: Started iscsid.service.
May 17 00:39:03.780202 systemd[1]: Starting dracut-initqueue.service...
May 17 00:39:03.794433 systemd-networkd[1104]: eth0: DHCPv4 address 172.31.16.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:39:03.799743 systemd[1]: Finished dracut-initqueue.service.
May 17 00:39:03.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.800783 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:39:03.801934 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:39:03.803194 systemd[1]: Reached target remote-fs.target.
May 17 00:39:03.805216 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:39:03.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:03.815572 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:39:04.226011 ignition[1096]: Ignition 2.14.0
May 17 00:39:04.226022 ignition[1096]: Stage: fetch-offline
May 17 00:39:04.226127 ignition[1096]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:04.226157 ignition[1096]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:04.242598 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:04.243120 ignition[1096]: Ignition finished successfully
May 17 00:39:04.244905 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:39:04.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.246600 systemd[1]: Starting ignition-fetch.service...
May 17 00:39:04.255514 ignition[1128]: Ignition 2.14.0
May 17 00:39:04.255530 ignition[1128]: Stage: fetch
May 17 00:39:04.255732 ignition[1128]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:04.255764 ignition[1128]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:04.266084 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:04.267266 ignition[1128]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:04.289231 ignition[1128]: INFO : PUT result: OK
May 17 00:39:04.290942 ignition[1128]: DEBUG : parsed url from cmdline: ""
May 17 00:39:04.290942 ignition[1128]: INFO : no config URL provided
May 17 00:39:04.290942 ignition[1128]: INFO : reading system config file "/usr/lib/ignition/user.ign"
May 17 00:39:04.290942 ignition[1128]: INFO : no config at "/usr/lib/ignition/user.ign"
May 17 00:39:04.293123 ignition[1128]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:04.293123 ignition[1128]: INFO : PUT result: OK
May 17 00:39:04.293123 ignition[1128]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 17 00:39:04.293123 ignition[1128]: INFO : GET result: OK
May 17 00:39:04.293123 ignition[1128]: DEBUG : parsing config with SHA512: b10c58959d53831cf5184aa1ecfd4787f427c0571353ce7ff4c715a4ea6ae5b10d9a6ab061e9bacb48431c04a563e05e55fa08aa2bf7a5fe966ab3de78cf7239
May 17 00:39:04.294841 ignition[1128]: fetch: fetch complete
May 17 00:39:04.294350 unknown[1128]: fetched base config from "system"
May 17 00:39:04.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.294845 ignition[1128]: fetch: fetch passed
May 17 00:39:04.294357 unknown[1128]: fetched base config from "system"
May 17 00:39:04.294894 ignition[1128]: Ignition finished successfully
May 17 00:39:04.294363 unknown[1128]: fetched user config from "aws"
May 17 00:39:04.296545 systemd[1]: Finished ignition-fetch.service.
May 17 00:39:04.298463 systemd[1]: Starting ignition-kargs.service...
May 17 00:39:04.309552 ignition[1134]: Ignition 2.14.0
May 17 00:39:04.309565 ignition[1134]: Stage: kargs
May 17 00:39:04.309784 ignition[1134]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:04.309814 ignition[1134]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:04.316958 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:04.317824 ignition[1134]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:04.318634 ignition[1134]: INFO : PUT result: OK
May 17 00:39:04.320082 ignition[1134]: kargs: kargs passed
May 17 00:39:04.320151 ignition[1134]: Ignition finished successfully
May 17 00:39:04.321364 systemd[1]: Finished ignition-kargs.service.
May 17 00:39:04.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.323496 systemd[1]: Starting ignition-disks.service...
May 17 00:39:04.332640 ignition[1140]: Ignition 2.14.0
May 17 00:39:04.332653 ignition[1140]: Stage: disks
May 17 00:39:04.332856 ignition[1140]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:04.332888 ignition[1140]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:04.340509 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:04.341394 ignition[1140]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:04.342457 ignition[1140]: INFO : PUT result: OK
May 17 00:39:04.344477 ignition[1140]: disks: disks passed
May 17 00:39:04.344541 ignition[1140]: Ignition finished successfully
May 17 00:39:04.345964 systemd[1]: Finished ignition-disks.service.
May 17 00:39:04.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.347156 systemd[1]: Reached target initrd-root-device.target.
May 17 00:39:04.348076 systemd[1]: Reached target local-fs-pre.target.
May 17 00:39:04.349019 systemd[1]: Reached target local-fs.target.
May 17 00:39:04.349992 systemd[1]: Reached target sysinit.target.
May 17 00:39:04.351017 systemd[1]: Reached target basic.target.
May 17 00:39:04.353095 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:39:04.380348 systemd-fsck[1148]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:39:04.383433 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:39:04.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.385296 systemd[1]: Mounting sysroot.mount...
May 17 00:39:04.407575 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:39:04.408844 systemd[1]: Mounted sysroot.mount.
May 17 00:39:04.410140 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:39:04.413336 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:39:04.415112 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:39:04.415920 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:39:04.415949 systemd[1]: Reached target ignition-diskful.target.
May 17 00:39:04.417862 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:39:04.421515 systemd[1]: Starting initrd-setup-root.service...
May 17 00:39:04.427406 initrd-setup-root[1169]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:39:04.450302 initrd-setup-root[1177]: cut: /sysroot/etc/group: No such file or directory
May 17 00:39:04.454636 initrd-setup-root[1185]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:39:04.459295 initrd-setup-root[1193]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:39:04.566445 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:39:04.587344 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1203)
May 17 00:39:04.593560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:39:04.593628 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:39:04.593650 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:39:04.605173 systemd[1]: Finished initrd-setup-root.service.
May 17 00:39:04.607001 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:39:04.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.607054 systemd[1]: Starting ignition-mount.service...
May 17 00:39:04.609133 systemd[1]: Starting sysroot-boot.service...
May 17 00:39:04.617197 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:39:04.619600 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 17 00:39:04.620431 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 17 00:39:04.634902 ignition[1230]: INFO : Ignition 2.14.0
May 17 00:39:04.636202 ignition[1230]: INFO : Stage: mount
May 17 00:39:04.637447 ignition[1230]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:04.639274 ignition[1230]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:04.653221 systemd[1]: Finished sysroot-boot.service.
May 17 00:39:04.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.655076 ignition[1230]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:04.655076 ignition[1230]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:04.656702 ignition[1230]: INFO : PUT result: OK
May 17 00:39:04.657913 ignition[1230]: INFO : mount: mount passed
May 17 00:39:04.658853 ignition[1230]: INFO : Ignition finished successfully
May 17 00:39:04.659223 systemd[1]: Finished ignition-mount.service.
May 17 00:39:04.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:04.661187 systemd[1]: Starting ignition-files.service...
May 17 00:39:04.669354 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:39:04.692355 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1240)
May 17 00:39:04.697083 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:39:04.697152 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:39:04.697175 kernel: BTRFS info (device nvme0n1p6): has skinny extents
May 17 00:39:04.737336 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:39:04.742354 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:39:04.753608 ignition[1259]: INFO : Ignition 2.14.0
May 17 00:39:04.753608 ignition[1259]: INFO : Stage: files
May 17 00:39:04.755047 ignition[1259]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:04.755047 ignition[1259]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:04.760645 ignition[1259]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:04.761380 ignition[1259]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:04.762031 ignition[1259]: INFO : PUT result: OK
May 17 00:39:04.764997 ignition[1259]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:39:04.773790 ignition[1259]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:39:04.773790 ignition[1259]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:39:04.786985 ignition[1259]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:39:04.788202 ignition[1259]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:39:04.789613 unknown[1259]: wrote ssh authorized keys file for user: core
May 17 00:39:04.790441 ignition[1259]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:39:04.792378 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
May 17 00:39:04.793523 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:39:04.797729 ignition[1259]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem429397338"
May 17 00:39:04.799147 ignition[1259]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem429397338": device or resource busy
May 17 00:39:04.799147 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem429397338", trying btrfs: device or resource busy
May 17 00:39:04.799147 ignition[1259]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem429397338"
May 17 00:39:04.799147 ignition[1259]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem429397338"
May 17 00:39:04.811828 ignition[1259]: INFO : op(3): [started] unmounting "/mnt/oem429397338"
May 17 00:39:04.811828 ignition[1259]: INFO : op(3): [finished] unmounting "/mnt/oem429397338"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:04.814085 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
May 17 00:39:04.814085 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:39:04.838441 ignition[1259]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem356614684"
May 17 00:39:04.838441 ignition[1259]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem356614684": device or resource busy
May 17 00:39:04.838441 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem356614684", trying btrfs: device or resource busy
May 17 00:39:04.838441 ignition[1259]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem356614684"
May 17 00:39:04.838441 ignition[1259]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem356614684"
May 17 00:39:04.838441 ignition[1259]: INFO : op(6): [started] unmounting "/mnt/oem356614684"
May 17 00:39:04.838441 ignition[1259]: INFO : op(6): [finished] unmounting "/mnt/oem356614684"
May 17 00:39:04.838441 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
May 17 00:39:04.838441 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
May 17 00:39:04.838441 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:39:04.838441 ignition[1259]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3461733345"
May 17 00:39:04.838441 ignition[1259]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3461733345": device or resource busy
May 17 00:39:04.838441 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3461733345", trying btrfs: device or resource busy
May 17 00:39:04.838441 ignition[1259]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3461733345"
May 17 00:39:04.838441 ignition[1259]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3461733345"
May 17 00:39:04.838441 ignition[1259]: INFO : op(9): [started] unmounting "/mnt/oem3461733345"
May 17 00:39:04.838441 ignition[1259]: INFO : op(9): [finished] unmounting "/mnt/oem3461733345"
May 17 00:39:04.838441 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
May 17 00:39:04.868108 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:39:04.868108 ignition[1259]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
May 17 00:39:04.868108 ignition[1259]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1717436019"
May 17 00:39:04.868108 ignition[1259]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1717436019": device or resource busy
May 17 00:39:04.868108 ignition[1259]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1717436019", trying btrfs: device or resource busy
May 17 00:39:04.868108 ignition[1259]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1717436019"
May 17 00:39:04.868108 ignition[1259]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1717436019"
May 17 00:39:04.868108 ignition[1259]: INFO : op(c): [started] unmounting "/mnt/oem1717436019"
May 17 00:39:04.868108 ignition[1259]: INFO : op(c): [finished] unmounting "/mnt/oem1717436019"
May 17 00:39:04.868108 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
May 17 00:39:04.868108 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:04.868108 ignition[1259]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:39:05.204483 systemd-networkd[1104]: eth0: Gained IPv6LL
May 17 00:39:05.544565 ignition[1259]: INFO : GET result: OK
May 17 00:39:05.997429 ignition[1259]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:39:05.999513 ignition[1259]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:39:06.001111 ignition[1259]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 00:39:06.001111 ignition[1259]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(e): [started] processing unit "nvidia.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(e): [finished] processing unit "nvidia.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(f): [started] setting preset to enabled for "amazon-ssm-agent.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(f): [finished] setting preset to enabled for "amazon-ssm-agent.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(10): [started] setting preset to enabled for "nvidia.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(10): [finished] setting preset to enabled for "nvidia.service"
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:39:06.005853 ignition[1259]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 00:39:06.064928 kernel: kauditd_printk_skb: 26 callbacks suppressed
May 17 00:39:06.064974 kernel: audit: type=1130 audit(1747442346.013:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.064995 kernel: audit: type=1130 audit(1747442346.034:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.065014 kernel: audit: type=1131 audit(1747442346.035:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.065032 kernel: audit: type=1130 audit(1747442346.050:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.065252 ignition[1259]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:39:06.065252 ignition[1259]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:39:06.065252 ignition[1259]: INFO : files: files passed
May 17 00:39:06.065252 ignition[1259]: INFO : Ignition finished successfully
May 17 00:39:06.012146 systemd[1]: Finished ignition-files.service.
May 17 00:39:06.022113 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:39:06.074986 initrd-setup-root-after-ignition[1284]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:39:06.025575 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:39:06.026778 systemd[1]: Starting ignition-quench.service...
May 17 00:39:06.032763 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:39:06.032891 systemd[1]: Finished ignition-quench.service.
May 17 00:39:06.095539 kernel: audit: type=1130 audit(1747442346.083:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.095586 kernel: audit: type=1131 audit(1747442346.083:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.043418 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:39:06.050803 systemd[1]: Reached target ignition-complete.target.
May 17 00:39:06.060392 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:39:06.082609 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:39:06.082703 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:39:06.083573 systemd[1]: Reached target initrd-fs.target.
May 17 00:39:06.096421 systemd[1]: Reached target initrd.target.
May 17 00:39:06.097792 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:39:06.099161 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:39:06.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.121454 kernel: audit: type=1130 audit(1747442346.114:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.114076 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:39:06.122672 systemd[1]: Starting initrd-cleanup.service...
May 17 00:39:06.134005 systemd[1]: Stopped target nss-lookup.target.
May 17 00:39:06.135562 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:39:06.136416 systemd[1]: Stopped target timers.target.
May 17 00:39:06.137678 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:39:06.145093 kernel: audit: type=1131 audit(1747442346.138:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.137844 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:39:06.139181 systemd[1]: Stopped target initrd.target.
May 17 00:39:06.145900 systemd[1]: Stopped target basic.target.
May 17 00:39:06.147209 systemd[1]: Stopped target ignition-complete.target.
May 17 00:39:06.148443 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:39:06.149611 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:39:06.150842 systemd[1]: Stopped target remote-fs.target.
May 17 00:39:06.151992 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:39:06.153220 systemd[1]: Stopped target sysinit.target.
May 17 00:39:06.154508 systemd[1]: Stopped target local-fs.target.
May 17 00:39:06.155672 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:39:06.156874 systemd[1]: Stopped target swap.target.
May 17 00:39:06.165075 kernel: audit: type=1131 audit(1747442346.158:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.157949 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:39:06.158153 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:39:06.173166 kernel: audit: type=1131 audit(1747442346.166:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.159414 systemd[1]: Stopped target cryptsetup.target.
May 17 00:39:06.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.165861 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:39:06.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.166071 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:39:06.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.189529 iscsid[1109]: iscsid shutting down.
May 17 00:39:06.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.167345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:39:06.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.194024 ignition[1297]: INFO : Ignition 2.14.0
May 17 00:39:06.194024 ignition[1297]: INFO : Stage: umount
May 17 00:39:06.194024 ignition[1297]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 00:39:06.194024 ignition[1297]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
May 17 00:39:06.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.167551 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:39:06.174110 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:39:06.174393 systemd[1]: Stopped ignition-files.service.
May 17 00:39:06.216037 ignition[1297]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:39:06.216037 ignition[1297]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:39:06.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.176808 systemd[1]: Stopping ignition-mount.service...
May 17 00:39:06.220349 ignition[1297]: INFO : PUT result: OK
May 17 00:39:06.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.178572 systemd[1]: Stopping iscsid.service...
May 17 00:39:06.228547 ignition[1297]: INFO : umount: umount passed
May 17 00:39:06.228547 ignition[1297]: INFO : Ignition finished successfully
May 17 00:39:06.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.179484 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:39:06.179672 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:39:06.187028 systemd[1]: Stopping sysroot-boot.service...
May 17 00:39:06.188015 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:39:06.188278 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:39:06.190615 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:39:06.191220 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:39:06.201790 systemd[1]: iscsid.service: Deactivated successfully.
May 17 00:39:06.201927 systemd[1]: Stopped iscsid.service.
May 17 00:39:06.206584 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:39:06.206709 systemd[1]: Finished initrd-cleanup.service.
May 17 00:39:06.210787 systemd[1]: Stopping iscsiuio.service...
May 17 00:39:06.215588 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:39:06.215708 systemd[1]: Stopped iscsiuio.service.
May 17 00:39:06.221704 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:39:06.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.221816 systemd[1]: Stopped ignition-mount.service.
May 17 00:39:06.222962 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:39:06.223022 systemd[1]: Stopped ignition-disks.service.
May 17 00:39:06.224039 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:39:06.224095 systemd[1]: Stopped ignition-kargs.service.
May 17 00:39:06.225070 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:39:06.225121 systemd[1]: Stopped ignition-fetch.service.
May 17 00:39:06.226122 systemd[1]: Stopped target network.target.
May 17 00:39:06.227299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:39:06.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.228224 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:39:06.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.254000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:39:06.229128 systemd[1]: Stopped target paths.target.
May 17 00:39:06.231139 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:39:06.236398 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:39:06.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.237445 systemd[1]: Stopped target slices.target.
May 17 00:39:06.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.238905 systemd[1]: Stopped target sockets.target.
May 17 00:39:06.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.240000 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:39:06.240061 systemd[1]: Closed iscsid.socket.
May 17 00:39:06.241066 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:39:06.241114 systemd[1]: Closed iscsiuio.socket.
May 17 00:39:06.242102 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:39:06.242167 systemd[1]: Stopped ignition-setup.service.
May 17 00:39:06.243688 systemd[1]: Stopping systemd-networkd.service...
May 17 00:39:06.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.244637 systemd[1]: Stopping systemd-resolved.service...
May 17 00:39:06.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.247368 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:39:06.248406 systemd-networkd[1104]: eth0: DHCPv6 lease lost
May 17 00:39:06.278000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:39:06.251241 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:39:06.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.251416 systemd[1]: Stopped systemd-resolved.service.
May 17 00:39:06.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.252723 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:39:06.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.252861 systemd[1]: Stopped systemd-networkd.service.
May 17 00:39:06.254111 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:39:06.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.254159 systemd[1]: Closed systemd-networkd.socket.
May 17 00:39:06.256378 systemd[1]: Stopping network-cleanup.service...
May 17 00:39:06.259499 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:39:06.259582 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:39:06.260773 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:39:06.260834 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:39:06.262045 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:39:06.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.262104 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:39:06.263351 systemd[1]: Stopping systemd-udevd.service...
May 17 00:39:06.269296 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:39:06.274264 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:39:06.274469 systemd[1]: Stopped systemd-udevd.service.
May 17 00:39:06.276083 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:39:06.276212 systemd[1]: Stopped network-cleanup.service.
May 17 00:39:06.277258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:39:06.277325 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:39:06.278300 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:39:06.278372 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:39:06.279400 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:39:06.279458 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:39:06.281223 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:39:06.281279 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:39:06.282540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:39:06.282593 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:39:06.285274 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:39:06.285985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:39:06.286054 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:39:06.295053 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:39:06.295159 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:39:06.358643 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:39:06.358756 systemd[1]: Stopped sysroot-boot.service.
May 17 00:39:06.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.360034 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:39:06.360846 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:39:06.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:06.360904 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:39:06.362884 systemd[1]: Starting initrd-switch-root.service...
May 17 00:39:06.375503 systemd[1]: Switching root.
May 17 00:39:06.395538 systemd-journald[185]: Journal stopped
May 17 00:39:11.443945 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 17 00:39:11.444032 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:39:11.444059 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:39:11.444085 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:39:11.444106 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:39:11.444126 kernel: SELinux: policy capability open_perms=1
May 17 00:39:11.444148 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:39:11.444168 kernel: SELinux: policy capability always_check_network=0
May 17 00:39:11.444187 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:39:11.444208 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:39:11.444231 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:39:11.444251 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:39:11.444280 systemd[1]: Successfully loaded SELinux policy in 72.955ms.
May 17 00:39:11.444339 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.905ms.
May 17 00:39:11.444368 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:39:11.444390 systemd[1]: Detected virtualization amazon.
May 17 00:39:11.444412 systemd[1]: Detected architecture x86-64.
May 17 00:39:11.444435 systemd[1]: Detected first boot.
May 17 00:39:11.444459 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:39:11.444482 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:39:11.444505 systemd[1]: Populated /etc with preset unit settings.
May 17 00:39:11.444528 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:39:11.444558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:39:11.444582 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:39:11.444605 kernel: kauditd_printk_skb: 48 callbacks suppressed
May 17 00:39:11.444629 kernel: audit: type=1334 audit(1747442351.151:88): prog-id=12 op=LOAD
May 17 00:39:11.444649 kernel: audit: type=1334 audit(1747442351.151:89): prog-id=3 op=UNLOAD
May 17 00:39:11.444669 kernel: audit: type=1334 audit(1747442351.153:90): prog-id=13 op=LOAD
May 17 00:39:11.444690 kernel: audit: type=1334 audit(1747442351.155:91): prog-id=14 op=LOAD
May 17 00:39:11.444710 kernel: audit: type=1334 audit(1747442351.155:92): prog-id=4 op=UNLOAD
May 17 00:39:11.444728 kernel: audit: type=1334 audit(1747442351.155:93): prog-id=5 op=UNLOAD
May 17 00:39:11.444749 kernel: audit: type=1334 audit(1747442351.157:94): prog-id=15 op=LOAD
May 17 00:39:11.444771 kernel: audit: type=1334 audit(1747442351.157:95): prog-id=12 op=UNLOAD
May 17 00:39:11.444791 kernel: audit: type=1334 audit(1747442351.159:96): prog-id=16 op=LOAD
May 17 00:39:11.444812 kernel: audit: type=1334 audit(1747442351.159:97): prog-id=17 op=LOAD
May 17 00:39:11.444834 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:39:11.444855 systemd[1]: Stopped initrd-switch-root.service.
May 17 00:39:11.444876 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:39:11.444897 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:39:11.444918 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:39:11.444942 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 17 00:39:11.444969 systemd[1]: Created slice system-getty.slice.
May 17 00:39:11.444990 systemd[1]: Created slice system-modprobe.slice.
May 17 00:39:11.445014 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:39:11.445036 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:39:11.445057 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:39:11.445079 systemd[1]: Created slice user.slice.
May 17 00:39:11.445100 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:39:11.445122 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:39:11.445147 systemd[1]: Set up automount boot.automount.
May 17 00:39:11.445170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:39:11.445190 systemd[1]: Stopped target initrd-switch-root.target.
May 17 00:39:11.445212 systemd[1]: Stopped target initrd-fs.target.
May 17 00:39:11.445234 systemd[1]: Stopped target initrd-root-fs.target.
May 17 00:39:11.445255 systemd[1]: Reached target integritysetup.target.
May 17 00:39:11.445276 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:39:11.445297 systemd[1]: Reached target remote-fs.target.
May 17 00:39:11.455391 systemd[1]: Reached target slices.target.
May 17 00:39:11.455435 systemd[1]: Reached target swap.target.
May 17 00:39:11.455456 systemd[1]: Reached target torcx.target.
May 17 00:39:11.455476 systemd[1]: Reached target veritysetup.target.
May 17 00:39:11.455496 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:39:11.455515 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:39:11.455534 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:39:11.455555 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:39:11.455575 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:39:11.455597 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:39:11.455623 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:39:11.455646 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:39:11.455666 systemd[1]: Mounting media.mount...
May 17 00:39:11.455685 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:11.455705 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:39:11.455737 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:39:11.455761 systemd[1]: Mounting tmp.mount...
May 17 00:39:11.455782 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:39:11.455803 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:39:11.455825 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:39:11.455845 systemd[1]: Starting modprobe@configfs.service...
May 17 00:39:11.455864 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:39:11.455883 systemd[1]: Starting modprobe@drm.service...
May 17 00:39:11.455903 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:39:11.455926 systemd[1]: Starting modprobe@fuse.service...
May 17 00:39:11.455946 systemd[1]: Starting modprobe@loop.service...
May 17 00:39:11.455969 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:39:11.455989 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:39:11.456009 systemd[1]: Stopped systemd-fsck-root.service.
May 17 00:39:11.456028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:39:11.456050 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:39:11.456070 systemd[1]: Stopped systemd-journald.service.
May 17 00:39:11.456088 kernel: fuse: init (API version 7.34)
May 17 00:39:11.456114 systemd[1]: Starting systemd-journald.service...
May 17 00:39:11.456135 systemd[1]: Starting systemd-modules-load.service...
May 17 00:39:11.456156 systemd[1]: Starting systemd-network-generator.service...
May 17 00:39:11.456178 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:39:11.456198 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:39:11.456225 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:39:11.456246 kernel: loop: module loaded
May 17 00:39:11.456266 systemd[1]: Stopped verity-setup.service.
May 17 00:39:11.456287 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:11.456337 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:39:11.456358 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:39:11.456379 systemd[1]: Mounted media.mount.
May 17 00:39:11.456400 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:39:11.456421 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:39:11.456442 systemd[1]: Mounted tmp.mount.
May 17 00:39:11.456462 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:39:11.456483 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:39:11.456503 systemd[1]: Finished modprobe@configfs.service.
May 17 00:39:11.456526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:39:11.456547 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:39:11.456568 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:39:11.456589 systemd[1]: Finished modprobe@drm.service.
May 17 00:39:11.456609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:39:11.456634 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:39:11.456654 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:39:11.456675 systemd[1]: Finished modprobe@fuse.service.
May 17 00:39:11.456704 systemd-journald[1409]: Journal started
May 17 00:39:11.456784 systemd-journald[1409]: Runtime Journal (/run/log/journal/ec27c79694c192ff2dd21587b3c300ef) is 4.8M, max 38.3M, 33.5M free.
May 17 00:39:07.010000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:39:07.169000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:39:07.169000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:39:07.169000 audit: BPF prog-id=10 op=LOAD
May 17 00:39:07.169000 audit: BPF prog-id=10 op=UNLOAD
May 17 00:39:07.169000 audit: BPF prog-id=11 op=LOAD
May 17 00:39:07.170000 audit: BPF prog-id=11 op=UNLOAD
May 17 00:39:07.406000 audit[1330]: AVC avc: denied { associate } for pid=1330 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:39:07.406000 audit[1330]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1313 pid=1330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:39:07.406000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:39:11.462797 systemd[1]: Started systemd-journald.service.
May 17 00:39:07.410000 audit[1330]: AVC avc: denied { associate } for pid=1330 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:39:07.410000 audit[1330]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=1313 pid=1330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:39:07.410000 audit: CWD cwd="/"
May 17 00:39:07.410000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:07.410000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:07.410000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:39:11.151000 audit: BPF prog-id=12 op=LOAD
May 17 00:39:11.151000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:39:11.153000 audit: BPF prog-id=13 op=LOAD
May 17 00:39:11.155000 audit: BPF prog-id=14 op=LOAD
May 17 00:39:11.155000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:39:11.155000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:39:11.157000 audit: BPF prog-id=15 op=LOAD
May 17 00:39:11.157000 audit: BPF prog-id=12 op=UNLOAD
May 17 00:39:11.159000 audit: BPF prog-id=16 op=LOAD
May 17 00:39:11.159000 audit: BPF prog-id=17 op=LOAD
May 17 00:39:11.159000 audit: BPF prog-id=13 op=UNLOAD
May 17 00:39:11.159000 audit: BPF prog-id=14 op=UNLOAD
May 17 00:39:11.161000 audit: BPF prog-id=18 op=LOAD
May 17 00:39:11.161000 audit: BPF prog-id=15 op=UNLOAD
May 17 00:39:11.163000 audit: BPF prog-id=19 op=LOAD
May 17 00:39:11.164000 audit: BPF prog-id=20 op=LOAD
May 17 00:39:11.164000 audit: BPF prog-id=16 op=UNLOAD
May 17 00:39:11.164000 audit: BPF prog-id=17 op=UNLOAD
May 17 00:39:11.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.181000 audit: BPF prog-id=18 op=UNLOAD
May 17 00:39:11.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.340000 audit: BPF prog-id=21 op=LOAD
May 17 00:39:11.341000 audit: BPF prog-id=22 op=LOAD
May 17 00:39:11.341000 audit: BPF prog-id=23 op=LOAD
May 17 00:39:11.341000 audit: BPF prog-id=19 op=UNLOAD
May 17 00:39:11.341000 audit: BPF prog-id=20 op=UNLOAD
May 17 00:39:11.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.441000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:39:11.441000 audit[1409]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe32936090 a2=4000 a3=7ffe3293612c items=0 ppid=1 pid=1409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:39:11.441000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:39:11.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.150167 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:39:07.385333 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:39:11.150184 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
May 17 00:39:07.386582 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:39:11.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.165302 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:39:07.386602 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:39:11.462565 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:39:07.386636 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 17 00:39:11.462745 systemd[1]: Finished modprobe@loop.service.
May 17 00:39:07.386646 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 17 00:39:11.464066 systemd[1]: Finished systemd-modules-load.service.
May 17 00:39:07.386678 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 17 00:39:11.465854 systemd[1]: Finished systemd-network-generator.service.
May 17 00:39:07.386692 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 17 00:39:11.467091 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:39:07.386884 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 17 00:39:11.468527 systemd[1]: Reached target network-pre.target.
May 17 00:39:07.386919 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:39:07.386931 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:39:07.389208 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 17 00:39:07.389248 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 17 00:39:07.389268 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 17 00:39:07.389283 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 17 00:39:07.389303 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 17 00:39:07.389340 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 17 00:39:10.622925 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:39:10.623185 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:39:10.623301 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:39:10.623513 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:39:10.623564 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 17 00:39:10.623622 /usr/lib/systemd/system-generators/torcx-generator[1330]: time="2025-05-17T00:39:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 17 00:39:11.472760 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:39:11.475081 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:39:11.479785 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:39:11.482716 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:39:11.485088 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:39:11.486465 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:39:11.490116 systemd[1]: Starting systemd-random-seed.service...
May 17 00:39:11.491076 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:39:11.492851 systemd[1]: Starting systemd-sysctl.service...
May 17 00:39:11.497825 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:39:11.499875 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:39:11.512198 systemd-journald[1409]: Time spent on flushing to /var/log/journal/ec27c79694c192ff2dd21587b3c300ef is 56.087ms for 1211 entries.
May 17 00:39:11.512198 systemd-journald[1409]: System Journal (/var/log/journal/ec27c79694c192ff2dd21587b3c300ef) is 8.0M, max 195.6M, 187.6M free.
May 17 00:39:11.587751 systemd-journald[1409]: Received client request to flush runtime journal.
May 17 00:39:11.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.519333 systemd[1]: Finished systemd-random-seed.service.
May 17 00:39:11.520339 systemd[1]: Reached target first-boot-complete.target.
May 17 00:39:11.528708 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:39:11.589344 udevadm[1448]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:39:11.531277 systemd[1]: Starting systemd-sysusers.service...
May 17 00:39:11.539174 systemd[1]: Finished systemd-sysctl.service.
May 17 00:39:11.569133 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:39:11.572629 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:39:11.589360 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:39:11.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:11.663714 systemd[1]: Finished systemd-sysusers.service.
May 17 00:39:12.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.132000 audit: BPF prog-id=24 op=LOAD
May 17 00:39:12.132000 audit: BPF prog-id=25 op=LOAD
May 17 00:39:12.132000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:39:12.132000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:39:12.131343 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:39:12.133186 systemd[1]: Starting systemd-udevd.service...
May 17 00:39:12.151639 systemd-udevd[1450]: Using default interface naming scheme 'v252'.
May 17 00:39:12.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.224000 audit: BPF prog-id=26 op=LOAD
May 17 00:39:12.223324 systemd[1]: Started systemd-udevd.service.
May 17 00:39:12.225457 systemd[1]: Starting systemd-networkd.service...
May 17 00:39:12.249000 audit: BPF prog-id=27 op=LOAD
May 17 00:39:12.249000 audit: BPF prog-id=28 op=LOAD
May 17 00:39:12.249000 audit: BPF prog-id=29 op=LOAD
May 17 00:39:12.250257 systemd[1]: Starting systemd-userdbd.service...
May 17 00:39:12.250920 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
May 17 00:39:12.291439 (udev-worker)[1457]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:12.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.292346 systemd[1]: Started systemd-userdbd.service.
May 17 00:39:12.308360 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:39:12.324000 audit[1456]: AVC avc: denied { confidentiality } for pid=1456 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:39:12.360337 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
May 17 00:39:12.324000 audit[1456]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ebe4c24080 a1=338ac a2=7f1d71195bc5 a3=5 items=110 ppid=1450 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:39:12.324000 audit: CWD cwd="/"
May 17 00:39:12.324000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=1 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=2 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=3 name=(null) inode=14824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=4 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=5 name=(null) inode=14825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=6 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=7 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=8 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=9 name=(null) inode=14827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=10 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=11 name=(null) inode=14828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=12 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=13 name=(null) inode=14829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=14 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=15 name=(null) inode=14830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=16 name=(null) inode=14826 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=17 name=(null) inode=14831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=18 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=19 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=20 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=21 name=(null) inode=14833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=22 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=23 name=(null) inode=14834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=24 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=25 name=(null) inode=14835 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=26 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=27 name=(null) inode=14836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=28 name=(null) inode=14832 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=29 name=(null) inode=14837 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=30 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=31 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=32 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=33 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=34 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=35 name=(null) inode=14100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=36 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=37 name=(null) inode=14101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=38 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=39 name=(null) inode=14102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=40 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=41 name=(null) inode=14103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=42 name=(null) inode=14823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=43 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=44 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=45 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=46 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=47 name=(null) inode=14106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=48 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=49 name=(null) inode=14107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=50 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=51 name=(null) inode=14108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=52 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=53 name=(null) inode=14109 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=55 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=56 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=57 name=(null) inode=14111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=58 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=59 name=(null) inode=14112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=60 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=61 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=62 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=63 name=(null) inode=14114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=64 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=65 name=(null) inode=14115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=66 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=67 name=(null) inode=14116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=68 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=69 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=70 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=71 name=(null) inode=14118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=72 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=73 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=74 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=75 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=76 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=77 name=(null) inode=14121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=78 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=79 name=(null) inode=14122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=80 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=81 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=82 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=83 name=(null) inode=14124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=84 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=85 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=86 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=87 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=88 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=89 name=(null) inode=14127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=90 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=91 name=(null) inode=14128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=92 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=93 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=94 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=95 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=96 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=97 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=98 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=99 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=100 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=101 name=(null) inode=14133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=102 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=103 name=(null) inode=14134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=104 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=105 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=106 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=107 name=(null) inode=14136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PATH item=109 name=(null) inode=14137 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:39:12.324000 audit: PROCTITLE proctitle="(udev-worker)"
May 17 00:39:12.372727 kernel: ACPI: button: Power Button [PWRF]
May 17 00:39:12.372801 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
May 17 00:39:12.374546 kernel: ACPI: button: Sleep Button [SLPF]
May 17 00:39:12.388342 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
May 17 00:39:12.394368 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:39:12.415778 systemd-networkd[1458]: lo: Link UP
May 17 00:39:12.415787 systemd-networkd[1458]: lo: Gained carrier
May 17 00:39:12.416222 systemd-networkd[1458]: Enumeration completed
May 17 00:39:12.416347 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:39:12.416352 systemd[1]: Started systemd-networkd.service.
May 17 00:39:12.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.418126 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:39:12.422337 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:39:12.424979 systemd-networkd[1458]: eth0: Link UP
May 17 00:39:12.425099 systemd-networkd[1458]: eth0: Gained carrier
May 17 00:39:12.436480 systemd-networkd[1458]: eth0: DHCPv4 address 172.31.16.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:39:12.522839 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:39:12.523841 systemd[1]: Finished systemd-udev-settle.service.
May 17 00:39:12.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.525680 systemd[1]: Starting lvm2-activation-early.service...
May 17 00:39:12.588344 lvm[1564]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:39:12.614560 systemd[1]: Finished lvm2-activation-early.service.
May 17 00:39:12.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.615215 systemd[1]: Reached target cryptsetup.target.
May 17 00:39:12.617364 systemd[1]: Starting lvm2-activation.service...
May 17 00:39:12.622123 lvm[1565]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:39:12.643619 systemd[1]: Finished lvm2-activation.service.
May 17 00:39:12.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.644284 systemd[1]: Reached target local-fs-pre.target.
May 17 00:39:12.644779 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:39:12.644809 systemd[1]: Reached target local-fs.target.
May 17 00:39:12.645257 systemd[1]: Reached target machines.target.
May 17 00:39:12.646949 systemd[1]: Starting ldconfig.service...
May 17 00:39:12.648575 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:39:12.648631 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:12.649718 systemd[1]: Starting systemd-boot-update.service...
May 17 00:39:12.651220 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 17 00:39:12.652819 systemd[1]: Starting systemd-machine-id-commit.service...
May 17 00:39:12.654376 systemd[1]: Starting systemd-sysext.service...
May 17 00:39:12.665829 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1567 (bootctl)
May 17 00:39:12.667173 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 17 00:39:12.675264 systemd[1]: Unmounting usr-share-oem.mount...
May 17 00:39:12.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.678885 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 17 00:39:12.682186 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 17 00:39:12.682371 systemd[1]: Unmounted usr-share-oem.mount.
May 17 00:39:12.697347 kernel: loop0: detected capacity change from 0 to 221472
May 17 00:39:12.794339 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:39:12.817403 kernel: loop1: detected capacity change from 0 to 221472
May 17 00:39:12.840514 (sd-sysext)[1579]: Using extensions 'kubernetes'.
May 17 00:39:12.842244 (sd-sysext)[1579]: Merged extensions into '/usr'.
May 17 00:39:12.851610 systemd-fsck[1576]: fsck.fat 4.2 (2021-01-31)
May 17 00:39:12.851610 systemd-fsck[1576]: /dev/nvme0n1p1: 790 files, 120726/258078 clusters
May 17 00:39:12.854691 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 17 00:39:12.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.858762 systemd[1]: Mounting boot.mount...
May 17 00:39:12.871553 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:12.873503 systemd[1]: Mounting usr-share-oem.mount...
May 17 00:39:12.875020 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:39:12.879875 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:39:12.884103 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:39:12.888106 systemd[1]: Starting modprobe@loop.service...
May 17 00:39:12.890497 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:39:12.890704 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:12.890893 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:12.896645 systemd[1]: Mounted boot.mount.
May 17 00:39:12.901421 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:39:12.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.903661 systemd[1]: Finished systemd-machine-id-commit.service.
May 17 00:39:12.904836 systemd[1]: Mounted usr-share-oem.mount.
May 17 00:39:12.906787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:39:12.907488 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:39:12.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.910179 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:39:12.910503 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:39:12.911876 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:39:12.912035 systemd[1]: Finished modprobe@loop.service.
May 17 00:39:12.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.918170 systemd[1]: Finished systemd-sysext.service.
May 17 00:39:12.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:12.921022 systemd[1]: Starting ensure-sysext.service...
May 17 00:39:12.922004 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:39:12.922093 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:39:12.923801 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 00:39:12.934347 systemd[1]: Reloading.
May 17 00:39:12.977841 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 17 00:39:13.005483 /usr/lib/systemd/system-generators/torcx-generator[1618]: time="2025-05-17T00:39:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:39:13.006980 /usr/lib/systemd/system-generators/torcx-generator[1618]: time="2025-05-17T00:39:13Z" level=info msg="torcx already run"
May 17 00:39:13.012581 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:39:13.035544 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:39:13.088567 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:39:13.088586 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:39:13.108674 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:39:13.194000 audit: BPF prog-id=30 op=LOAD
May 17 00:39:13.194000 audit: BPF prog-id=31 op=LOAD
May 17 00:39:13.194000 audit: BPF prog-id=24 op=UNLOAD
May 17 00:39:13.194000 audit: BPF prog-id=25 op=UNLOAD
May 17 00:39:13.194000 audit: BPF prog-id=32 op=LOAD
May 17 00:39:13.194000 audit: BPF prog-id=27 op=UNLOAD
May 17 00:39:13.194000 audit: BPF prog-id=33 op=LOAD
May 17 00:39:13.194000 audit: BPF prog-id=34 op=LOAD
May 17 00:39:13.194000 audit: BPF prog-id=28 op=UNLOAD
May 17 00:39:13.194000 audit: BPF prog-id=29 op=UNLOAD
May 17 00:39:13.197000 audit: BPF prog-id=35 op=LOAD
May 17 00:39:13.197000 audit: BPF prog-id=21 op=UNLOAD
May 17 00:39:13.197000 audit: BPF prog-id=36 op=LOAD
May 17 00:39:13.197000 audit: BPF prog-id=37 op=LOAD
May 17 00:39:13.197000 audit: BPF prog-id=22 op=UNLOAD
May 17 00:39:13.197000 audit: BPF prog-id=23 op=UNLOAD
May 17 00:39:13.199000 audit: BPF prog-id=38 op=LOAD
May 17 00:39:13.199000 audit: BPF prog-id=26 op=UNLOAD
May 17 00:39:13.208183 systemd[1]: Finished systemd-boot-update.service.
May 17 00:39:13.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.226109 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:13.226531 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:39:13.228816 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:39:13.231188 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:39:13.234190 systemd[1]: Starting modprobe@loop.service...
May 17 00:39:13.235619 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:39:13.235809 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:13.235987 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:13.237262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:39:13.237693 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:39:13.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.240218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:39:13.240611 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:39:13.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.242706 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:39:13.242889 systemd[1]: Finished modprobe@loop.service.
May 17 00:39:13.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.244777 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:39:13.244939 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:39:13.247987 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:13.248957 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:39:13.251797 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:39:13.254542 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:39:13.256770 systemd[1]: Starting modprobe@loop.service...
May 17 00:39:13.257504 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:39:13.257717 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:13.257915 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:13.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.260092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:39:13.260279 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:39:13.261695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:39:13.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.262677 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:39:13.264169 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:39:13.264363 systemd[1]: Finished modprobe@loop.service.
May 17 00:39:13.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.266125 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:39:13.266285 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:39:13.271482 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:13.271975 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:39:13.274834 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:39:13.278055 systemd[1]: Starting modprobe@drm.service...
May 17 00:39:13.280993 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:39:13.283972 systemd[1]: Starting modprobe@loop.service...
May 17 00:39:13.285423 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:39:13.285639 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:13.285869 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:39:13.287403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:39:13.287587 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:39:13.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.288965 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:39:13.289137 systemd[1]: Finished modprobe@drm.service.
May 17 00:39:13.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.290281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:39:13.290471 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:39:13.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.291895 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:39:13.292072 systemd[1]: Finished modprobe@loop.service.
May 17 00:39:13.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.293648 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:39:13.293791 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:39:13.295469 systemd[1]: Finished ensure-sysext.service.
May 17 00:39:13.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.329133 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 17 00:39:13.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.334000 audit: BPF prog-id=39 op=LOAD
May 17 00:39:13.330982 systemd[1]: Starting audit-rules.service...
May 17 00:39:13.332524 systemd[1]: Starting clean-ca-certificates.service...
May 17 00:39:13.334008 systemd[1]: Starting systemd-journal-catalog-update.service...
May 17 00:39:13.336462 systemd[1]: Starting systemd-resolved.service...
May 17 00:39:13.337000 audit: BPF prog-id=40 op=LOAD
May 17 00:39:13.338961 systemd[1]: Starting systemd-timesyncd.service...
May 17 00:39:13.341472 systemd[1]: Starting systemd-update-utmp.service...
May 17 00:39:13.350000 audit[1690]: SYSTEM_BOOT pid=1690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.359061 systemd[1]: Finished systemd-update-utmp.service.
May 17 00:39:13.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.366746 systemd[1]: Finished clean-ca-certificates.service.
May 17 00:39:13.367395 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:39:13.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.416035 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 00:39:13.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:39:13.459521 systemd[1]: Started systemd-timesyncd.service.
May 17 00:39:13.459993 systemd[1]: Reached target time-set.target.
May 17 00:39:13.470772 systemd-resolved[1688]: Positive Trust Anchors:
May 17 00:39:13.470785 systemd-resolved[1688]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:39:13.470817 systemd-resolved[1688]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:39:13.479000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 17 00:39:13.479000 audit[1706]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff6159dc0 a2=420 a3=0 items=0 ppid=1685 pid=1706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:39:13.479000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 17 00:39:13.480024 augenrules[1706]: No rules
May 17 00:39:13.480526 systemd[1]: Finished audit-rules.service.
May 17 00:39:13.508592 systemd-resolved[1688]: Defaulting to hostname 'linux'.
May 17 00:39:13.510235 systemd[1]: Started systemd-resolved.service.
May 17 00:39:13.510669 systemd[1]: Reached target network.target.
May 17 00:39:13.510970 systemd[1]: Reached target nss-lookup.target.
May 17 00:39:13.588449 systemd-networkd[1458]: eth0: Gained IPv6LL
May 17 00:39:13.590246 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:39:13.590713 systemd[1]: Reached target network-online.target.
May 17 00:39:14.752342 systemd-resolved[1688]: Clock change detected. Flushing caches.
May 17 00:39:14.752652 systemd-timesyncd[1689]: Contacted time server 66.118.230.14:123 (0.flatcar.pool.ntp.org).
May 17 00:39:14.752786 systemd-timesyncd[1689]: Initial clock synchronization to Sat 2025-05-17 00:39:14.752283 UTC.
May 17 00:39:14.829288 ldconfig[1566]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:39:14.836174 systemd[1]: Finished ldconfig.service.
May 17 00:39:14.837871 systemd[1]: Starting systemd-update-done.service...
May 17 00:39:14.845093 systemd[1]: Finished systemd-update-done.service.
May 17 00:39:14.845549 systemd[1]: Reached target sysinit.target.
May 17 00:39:14.845974 systemd[1]: Started motdgen.path.
May 17 00:39:14.846304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 17 00:39:14.846913 systemd[1]: Started logrotate.timer.
May 17 00:39:14.847300 systemd[1]: Started mdadm.timer.
May 17 00:39:14.847611 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 17 00:39:14.847937 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:39:14.847975 systemd[1]: Reached target paths.target.
May 17 00:39:14.848270 systemd[1]: Reached target timers.target.
May 17 00:39:14.848923 systemd[1]: Listening on dbus.socket.
May 17 00:39:14.850250 systemd[1]: Starting docker.socket...
May 17 00:39:14.853903 systemd[1]: Listening on sshd.socket.
May 17 00:39:14.854387 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:14.854972 systemd[1]: Listening on docker.socket.
May 17 00:39:14.855382 systemd[1]: Reached target sockets.target.
May 17 00:39:14.855706 systemd[1]: Reached target basic.target.
May 17 00:39:14.856140 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:39:14.856168 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:39:14.857152 systemd[1]: Started amazon-ssm-agent.service.
May 17 00:39:14.858709 systemd[1]: Starting containerd.service...
May 17 00:39:14.860073 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 17 00:39:14.862047 systemd[1]: Starting dbus.service...
May 17 00:39:14.863740 systemd[1]: Starting enable-oem-cloudinit.service...
May 17 00:39:14.865611 systemd[1]: Starting extend-filesystems.service...
May 17 00:39:14.866531 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 17 00:39:14.873415 systemd[1]: Starting kubelet.service...
May 17 00:39:14.903306 jq[1718]: false
May 17 00:39:14.876295 systemd[1]: Starting motdgen.service...
May 17 00:39:14.878206 systemd[1]: Started nvidia.service.
May 17 00:39:14.882461 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 17 00:39:14.884201 systemd[1]: Starting sshd-keygen.service...
May 17 00:39:14.888380 systemd[1]: Starting systemd-logind.service...
May 17 00:39:14.891252 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:39:14.891336 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:39:14.891781 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:39:14.892636 systemd[1]: Starting update-engine.service...
May 17 00:39:14.895095 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 17 00:39:14.900047 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:39:14.900270 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 17 00:39:14.901249 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:39:14.901409 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 17 00:39:14.920842 jq[1730]: true
May 17 00:39:14.988513 jq[1742]: true
May 17 00:39:14.999613 extend-filesystems[1719]: Found loop1
May 17 00:39:15.002029 extend-filesystems[1719]: Found nvme0n1
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p1
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p2
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p3
May 17 00:39:15.026483 extend-filesystems[1719]: Found usr
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p4
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p6
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p7
May 17 00:39:15.026483 extend-filesystems[1719]: Found nvme0n1p9
May 17 00:39:15.026483 extend-filesystems[1719]: Checking size of /dev/nvme0n1p9
May 17 00:39:15.029968 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:39:15.030129 systemd[1]: Finished motdgen.service.
May 17 00:39:15.036576 dbus-daemon[1717]: [system] SELinux support is enabled
May 17 00:39:15.036776 systemd[1]: Started dbus.service.
May 17 00:39:15.040195 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:39:15.040237 systemd[1]: Reached target system-config.target.
May 17 00:39:15.040889 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:39:15.040921 systemd[1]: Reached target user-config.target.
May 17 00:39:15.088017 dbus-daemon[1717]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 17 00:39:15.094214 systemd[1]: Starting systemd-hostnamed.service...
May 17 00:39:15.105334 amazon-ssm-agent[1714]: 2025/05/17 00:39:15 Failed to load instance info from vault. RegistrationKey does not exist.
May 17 00:39:15.108724 amazon-ssm-agent[1714]: Initializing new seelog logger
May 17 00:39:15.118054 amazon-ssm-agent[1714]: New Seelog Logger Creation Complete
May 17 00:39:15.118926 amazon-ssm-agent[1714]: 2025/05/17 00:39:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 17 00:39:15.118926 amazon-ssm-agent[1714]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 17 00:39:15.118926 amazon-ssm-agent[1714]: 2025/05/17 00:39:15 processing appconfig overrides
May 17 00:39:15.142841 update_engine[1729]: I0517 00:39:15.142215 1729 main.cc:92] Flatcar Update Engine starting
May 17 00:39:15.149671 extend-filesystems[1719]: Resized partition /dev/nvme0n1p9
May 17 00:39:15.150306 systemd[1]: Started update-engine.service.
May 17 00:39:15.156752 update_engine[1729]: I0517 00:39:15.151036 1729 update_check_scheduler.cc:74] Next update check in 5m31s
May 17 00:39:15.154857 systemd[1]: Started locksmithd.service.
May 17 00:39:15.168695 extend-filesystems[1780]: resize2fs 1.46.5 (30-Dec-2021)
May 17 00:39:15.178845 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 17 00:39:15.196583 env[1735]: time="2025-05-17T00:39:15.196459837Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 17 00:39:15.205865 bash[1784]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:39:15.205103 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 17 00:39:15.284829 systemd-logind[1727]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:39:15.284865 systemd-logind[1727]: Watching system buttons on /dev/input/event2 (Sleep Button)
May 17 00:39:15.284889 systemd-logind[1727]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:39:15.286287 systemd-logind[1727]: New seat seat0.
May 17 00:39:15.295213 systemd[1]: Started systemd-logind.service.
May 17 00:39:15.305979 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 17 00:39:15.325996 extend-filesystems[1780]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 17 00:39:15.325996 extend-filesystems[1780]: old_desc_blocks = 1, new_desc_blocks = 1
May 17 00:39:15.325996 extend-filesystems[1780]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 17 00:39:15.332452 extend-filesystems[1719]: Resized filesystem in /dev/nvme0n1p9
May 17 00:39:15.328065 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:39:15.328275 systemd[1]: Finished extend-filesystems.service.
May 17 00:39:15.334174 systemd[1]: nvidia.service: Deactivated successfully.
May 17 00:39:15.371737 env[1735]: time="2025-05-17T00:39:15.371034576Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:39:15.371737 env[1735]: time="2025-05-17T00:39:15.371227349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:39:15.378429 env[1735]: time="2025-05-17T00:39:15.378371187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:39:15.378429 env[1735]: time="2025-05-17T00:39:15.378425134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:39:15.378762 env[1735]: time="2025-05-17T00:39:15.378730348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:39:15.378845 env[1735]: time="2025-05-17T00:39:15.378764806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:39:15.378845 env[1735]: time="2025-05-17T00:39:15.378785178Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 00:39:15.378845 env[1735]: time="2025-05-17T00:39:15.378800808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:39:15.378966 env[1735]: time="2025-05-17T00:39:15.378952260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:39:15.385731 env[1735]: time="2025-05-17T00:39:15.385118792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:39:15.385731 env[1735]: time="2025-05-17T00:39:15.385374956Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:39:15.385731 env[1735]: time="2025-05-17T00:39:15.385400295Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:39:15.385731 env[1735]: time="2025-05-17T00:39:15.385471441Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 00:39:15.385731 env[1735]: time="2025-05-17T00:39:15.385488481Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398437091Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398494159Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398526669Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398586196Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398608444Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398679718Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398701794Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398721754Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398742983Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398763155Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398781775Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398803411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.398966728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:39:15.399934 env[1735]: time="2025-05-17T00:39:15.399062690Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399537495Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399585642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399606115Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399678889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399698835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399779313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399798228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399827294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399846361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399864837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399881477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.399900665Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.400043564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..."
type=io.containerd.grpc.v1 May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.400062979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:39:15.400523 env[1735]: time="2025-05-17T00:39:15.400081943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:39:15.401080 env[1735]: time="2025-05-17T00:39:15.400099168Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:39:15.401080 env[1735]: time="2025-05-17T00:39:15.400125537Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:39:15.401080 env[1735]: time="2025-05-17T00:39:15.400142395Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:39:15.401080 env[1735]: time="2025-05-17T00:39:15.400172394Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:39:15.401080 env[1735]: time="2025-05-17T00:39:15.400226706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:39:15.401260 env[1735]: time="2025-05-17T00:39:15.400510137Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:39:15.401260 env[1735]: time="2025-05-17T00:39:15.400584382Z" level=info msg="Connect containerd service" May 17 00:39:15.401260 env[1735]: time="2025-05-17T00:39:15.400624706Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.401461105Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.401775169Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.401851535Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.404330356Z" level=info msg="Start subscribing containerd event" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.404407794Z" level=info msg="Start recovering state" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.404490277Z" level=info msg="Start event monitor" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.404511773Z" level=info msg="Start snapshots syncer" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.404526939Z" level=info msg="Start cni network conf syncer for default" May 17 00:39:15.405576 env[1735]: time="2025-05-17T00:39:15.404537832Z" level=info msg="Start streaming server" May 17 00:39:15.403050 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.hostname1' May 17 00:39:15.401989 systemd[1]: Started containerd.service. 
May 17 00:39:15.404554 dbus-daemon[1717]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1769 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 17 00:39:15.403218 systemd[1]: Started systemd-hostnamed.service. May 17 00:39:15.408579 systemd[1]: Starting polkit.service... May 17 00:39:15.430327 env[1735]: time="2025-05-17T00:39:15.429503685Z" level=info msg="containerd successfully booted in 0.251147s" May 17 00:39:15.443754 polkitd[1808]: Started polkitd version 121 May 17 00:39:15.485130 polkitd[1808]: Loading rules from directory /etc/polkit-1/rules.d May 17 00:39:15.486532 polkitd[1808]: Loading rules from directory /usr/share/polkit-1/rules.d May 17 00:39:15.490933 polkitd[1808]: Finished loading, compiling and executing 2 rules May 17 00:39:15.491536 dbus-daemon[1717]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 17 00:39:15.491732 systemd[1]: Started polkit.service. May 17 00:39:15.493704 polkitd[1808]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 17 00:39:15.509452 systemd-hostnamed[1769]: Hostname set to (transient) May 17 00:39:15.509568 systemd-resolved[1688]: System hostname changed to 'ip-172-31-16-188'. 
May 17 00:39:15.762095 coreos-metadata[1716]: May 17 00:39:15.761 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 17 00:39:15.768663 coreos-metadata[1716]: May 17 00:39:15.768 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
May 17 00:39:15.770168 coreos-metadata[1716]: May 17 00:39:15.769 INFO Fetch successful
May 17 00:39:15.770168 coreos-metadata[1716]: May 17 00:39:15.770 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
May 17 00:39:15.772119 coreos-metadata[1716]: May 17 00:39:15.771 INFO Fetch successful
May 17 00:39:15.774416 unknown[1716]: wrote ssh authorized keys file for user: core
May 17 00:39:15.785242 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Create new startup processor
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [LongRunningPluginsManager] registered plugins: {}
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing bookkeeping folders
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO removing the completed state files
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing bookkeeping folders for long running plugins
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing healthcheck folders for long running plugins
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing locations for inventory plugin
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing default location for custom inventory
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing default location for file inventory
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Initializing default location for role inventory
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Init the cloudwatchlogs publisher
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:runDocument
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:updateSsmAgent
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:refreshAssociation
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:configurePackage
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:downloadContent
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:softwareInventory
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:runPowerShellScript
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:configureDocker
May 17 00:39:15.790030 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform independent plugin aws:runDockerAction
May 17 00:39:15.793127 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Successfully loaded platform dependent plugin aws:runShellScript
May 17 00:39:15.793127 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
May 17 00:39:15.793127 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO OS: linux, Arch: amd64
May 17 00:39:15.793127 amazon-ssm-agent[1714]: datastore file /var/lib/amazon/ssm/i-06fd1b32e13b11bb4/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
May 17 00:39:15.802645 update-ssh-keys[1889]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:39:15.804356 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
May 17 00:39:15.835652 systemd[1]: Created slice system-sshd.slice.
May 17 00:39:15.885981 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] Starting document processing engine...
May 17 00:39:15.980288 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [EngineProcessor] Starting
May 17 00:39:16.074984 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
May 17 00:39:16.118782 locksmithd[1779]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:39:16.169545 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] Starting message polling
May 17 00:39:16.264242 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] Starting send replies to MDS
May 17 00:39:16.359224 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [instanceID=i-06fd1b32e13b11bb4] Starting association polling
May 17 00:39:16.455066 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
May 17 00:39:16.550550 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [Association] Launching response handler
May 17 00:39:16.646105 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
May 17 00:39:16.741745 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
May 17 00:39:16.837644 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
May 17 00:39:16.933799 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] Starting session document processing engine...
May 17 00:39:17.030036 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] [EngineProcessor] Starting
May 17 00:39:17.127232 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
May 17 00:39:17.224382 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-06fd1b32e13b11bb4, requestId: 95428099-0145-49ca-afa9-d4f0ded53afd
May 17 00:39:17.322121 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [OfflineService] Starting document processing engine...
May 17 00:39:17.397866 systemd[1]: Started kubelet.service.
May 17 00:39:17.419408 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [OfflineService] [EngineProcessor] Starting
May 17 00:39:17.493102 sshd_keygen[1750]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:39:17.515024 systemd[1]: Finished sshd-keygen.service.
May 17 00:39:17.517044 systemd[1]: Starting issuegen.service...
May 17 00:39:17.518799 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [OfflineService] [EngineProcessor] Initial processing
May 17 00:39:17.518799 systemd[1]: Started sshd@0-172.31.16.188:22-139.178.68.195:48464.service.
May 17 00:39:17.526599 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:39:17.526764 systemd[1]: Finished issuegen.service.
May 17 00:39:17.528786 systemd[1]: Starting systemd-user-sessions.service...
May 17 00:39:17.537028 systemd[1]: Finished systemd-user-sessions.service.
May 17 00:39:17.539251 systemd[1]: Started getty@tty1.service.
May 17 00:39:17.541967 systemd[1]: Started serial-getty@ttyS0.service.
May 17 00:39:17.543478 systemd[1]: Reached target getty.target.
May 17 00:39:17.544337 systemd[1]: Reached target multi-user.target.
May 17 00:39:17.547187 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 17 00:39:17.560072 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 17 00:39:17.560303 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 17 00:39:17.561373 systemd[1]: Startup finished in 648ms (kernel) + 6.131s (initrd) + 9.656s (userspace) = 16.436s.
May 17 00:39:17.616332 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [OfflineService] Starting message polling
May 17 00:39:17.714049 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [OfflineService] Starting send replies to MDS
May 17 00:39:17.723114 sshd[1919]: Accepted publickey for core from 139.178.68.195 port 48464 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:39:17.726361 sshd[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:39:17.746115 systemd[1]: Created slice user-500.slice.
May 17 00:39:17.747328 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:39:17.750893 systemd-logind[1727]: New session 1 of user core.
May 17 00:39:17.761704 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:39:17.763258 systemd[1]: Starting user@500.service...
May 17 00:39:17.768294 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:39:17.811877 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [LongRunningPluginsManager] starting long running plugin manager
May 17 00:39:17.863801 systemd[1932]: Queued start job for default target default.target.
May 17 00:39:17.864967 systemd[1932]: Reached target paths.target.
May 17 00:39:17.864991 systemd[1932]: Reached target sockets.target.
May 17 00:39:17.865004 systemd[1932]: Reached target timers.target.
May 17 00:39:17.865015 systemd[1932]: Reached target basic.target.
May 17 00:39:17.865062 systemd[1932]: Reached target default.target.
May 17 00:39:17.865093 systemd[1932]: Startup finished in 89ms.
May 17 00:39:17.865194 systemd[1]: Started user@500.service.
May 17 00:39:17.866162 systemd[1]: Started session-1.scope.
May 17 00:39:17.909939 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
May 17 00:39:18.011432 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [HealthCheck] HealthCheck reporting agent health.
May 17 00:39:18.010952 systemd[1]: Started sshd@1-172.31.16.188:22-139.178.68.195:48470.service.
May 17 00:39:18.109391 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
May 17 00:39:18.175465 sshd[1941]: Accepted publickey for core from 139.178.68.195 port 48470 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:39:18.176945 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:39:18.182306 systemd[1]: Started session-2.scope.
May 17 00:39:18.182968 systemd-logind[1727]: New session 2 of user core.
May 17 00:39:18.208035 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] listening reply.
May 17 00:39:18.307404 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [StartupProcessor] Executing startup processor tasks
May 17 00:39:18.308006 sshd[1941]: pam_unix(sshd:session): session closed for user core
May 17 00:39:18.310358 systemd[1]: sshd@1-172.31.16.188:22-139.178.68.195:48470.service: Deactivated successfully.
May 17 00:39:18.311455 systemd[1]: session-2.scope: Deactivated successfully.
May 17 00:39:18.312024 systemd-logind[1727]: Session 2 logged out. Waiting for processes to exit.
May 17 00:39:18.313037 systemd-logind[1727]: Removed session 2.
May 17 00:39:18.334210 systemd[1]: Started sshd@2-172.31.16.188:22-139.178.68.195:48476.service.
May 17 00:39:18.382841 kubelet[1910]: E0517 00:39:18.382762 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:39:18.384319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:39:18.384449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:39:18.384688 systemd[1]: kubelet.service: Consumed 1.181s CPU time.
May 17 00:39:18.406082 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
May 17 00:39:18.498350 sshd[1947]: Accepted publickey for core from 139.178.68.195 port 48476 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:39:18.499830 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:39:18.504104 systemd-logind[1727]: New session 3 of user core.
May 17 00:39:18.504502 systemd[1]: Started session-3.scope.
May 17 00:39:18.505500 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
May 17 00:39:18.605056 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7
May 17 00:39:18.629441 sshd[1947]: pam_unix(sshd:session): session closed for user core
May 17 00:39:18.632125 systemd[1]: sshd@2-172.31.16.188:22-139.178.68.195:48476.service: Deactivated successfully.
May 17 00:39:18.632831 systemd[1]: session-3.scope: Deactivated successfully.
May 17 00:39:18.633377 systemd-logind[1727]: Session 3 logged out. Waiting for processes to exit.
May 17 00:39:18.634179 systemd-logind[1727]: Removed session 3.
May 17 00:39:18.654977 systemd[1]: Started sshd@3-172.31.16.188:22-139.178.68.195:48484.service.
May 17 00:39:18.704580 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06fd1b32e13b11bb4?role=subscribe&stream=input
May 17 00:39:18.804405 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06fd1b32e13b11bb4?role=subscribe&stream=input
May 17 00:39:18.820562 sshd[1953]: Accepted publickey for core from 139.178.68.195 port 48484 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:39:18.822030 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:39:18.826933 systemd-logind[1727]: New session 4 of user core.
May 17 00:39:18.827487 systemd[1]: Started session-4.scope.
May 17 00:39:18.904523 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] Starting receiving message from control channel
May 17 00:39:18.958619 sshd[1953]: pam_unix(sshd:session): session closed for user core
May 17 00:39:18.961480 systemd[1]: sshd@3-172.31.16.188:22-139.178.68.195:48484.service: Deactivated successfully.
May 17 00:39:18.962349 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:39:18.963074 systemd-logind[1727]: Session 4 logged out. Waiting for processes to exit.
May 17 00:39:18.964008 systemd-logind[1727]: Removed session 4.
May 17 00:39:18.983796 systemd[1]: Started sshd@4-172.31.16.188:22-139.178.68.195:48490.service.
May 17 00:39:19.004664 amazon-ssm-agent[1714]: 2025-05-17 00:39:15 INFO [MessageGatewayService] [EngineProcessor] Initial processing
May 17 00:39:19.147296 sshd[1959]: Accepted publickey for core from 139.178.68.195 port 48490 ssh2: RSA SHA256:I5cGDzOOPhNK8a4J4SFPiuUQivu3TK8ocBzhX4AkN30
May 17 00:39:19.148678 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:39:19.153432 systemd-logind[1727]: New session 5 of user core.
May 17 00:39:19.153744 systemd[1]: Started session-5.scope.
May 17 00:39:19.305033 sudo[1962]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:39:19.305626 sudo[1962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:39:19.319209 systemd[1]: Starting coreos-metadata.service...
May 17 00:39:19.396571 coreos-metadata[1966]: May 17 00:39:19.396 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 17 00:39:19.397278 coreos-metadata[1966]: May 17 00:39:19.397 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
May 17 00:39:19.397803 coreos-metadata[1966]: May 17 00:39:19.397 INFO Fetch successful
May 17 00:39:19.397864 coreos-metadata[1966]: May 17 00:39:19.397 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
May 17 00:39:19.398380 coreos-metadata[1966]: May 17 00:39:19.398 INFO Fetch successful
May 17 00:39:19.398380 coreos-metadata[1966]: May 17 00:39:19.398 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
May 17 00:39:19.398893 coreos-metadata[1966]: May 17 00:39:19.398 INFO Fetch successful
May 17 00:39:19.398893 coreos-metadata[1966]: May 17 00:39:19.398 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
May 17 00:39:19.399318 coreos-metadata[1966]: May 17 00:39:19.399 INFO Fetch successful
May 17 00:39:19.399422 coreos-metadata[1966]: May 17 00:39:19.399 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
May 17 00:39:19.400006 coreos-metadata[1966]: May 17 00:39:19.399 INFO Fetch successful
May 17 00:39:19.400048 coreos-metadata[1966]: May 17 00:39:19.400 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
May 17 00:39:19.400527 coreos-metadata[1966]: May 17 00:39:19.400 INFO Fetch successful
May 17 00:39:19.400631 coreos-metadata[1966]: May 17 00:39:19.400 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
May 17 00:39:19.401159 coreos-metadata[1966]: May 17 00:39:19.401 INFO Fetch successful
May 17 00:39:19.401159 coreos-metadata[1966]: May 17 00:39:19.401 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
May 17 00:39:19.401644 coreos-metadata[1966]: May 17 00:39:19.401 INFO Fetch successful
May 17 00:39:19.409702 systemd[1]: Finished coreos-metadata.service.
May 17 00:39:20.270723 systemd[1]: Stopped kubelet.service.
May 17 00:39:20.270900 systemd[1]: kubelet.service: Consumed 1.181s CPU time.
May 17 00:39:20.272988 systemd[1]: Starting kubelet.service...
May 17 00:39:20.307678 systemd[1]: Reloading.
May 17 00:39:20.439051 /usr/lib/systemd/system-generators/torcx-generator[2019]: time="2025-05-17T00:39:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:39:20.439091 /usr/lib/systemd/system-generators/torcx-generator[2019]: time="2025-05-17T00:39:20Z" level=info msg="torcx already run"
May 17 00:39:20.560996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:39:20.561022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:39:20.580007 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:39:20.689138 systemd[1]: Started kubelet.service.
May 17 00:39:20.691947 systemd[1]: Stopping kubelet.service...
May 17 00:39:20.692346 systemd[1]: kubelet.service: Deactivated successfully.
May 17 00:39:20.692577 systemd[1]: Stopped kubelet.service.
May 17 00:39:20.694647 systemd[1]: Starting kubelet.service...
May 17 00:39:21.182352 systemd[1]: Started kubelet.service.
May 17 00:39:21.238525 kubelet[2082]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:39:21.238525 kubelet[2082]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 17 00:39:21.238525 kubelet[2082]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:39:21.239005 kubelet[2082]: I0517 00:39:21.238540 2082 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:39:21.470515 kubelet[2082]: I0517 00:39:21.470342 2082 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 17 00:39:21.470515 kubelet[2082]: I0517 00:39:21.470453 2082 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:39:21.471159 kubelet[2082]: I0517 00:39:21.471131 2082 server.go:934] "Client rotation is on, will bootstrap in background"
May 17 00:39:21.514926 kubelet[2082]: I0517 00:39:21.514869 2082 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:39:21.529692 kubelet[2082]: E0517 00:39:21.529648 2082 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:39:21.529692 kubelet[2082]: I0517 00:39:21.529683 2082 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:39:21.544699 kubelet[2082]: I0517 00:39:21.544649 2082 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:39:21.544858 kubelet[2082]: I0517 00:39:21.544762 2082 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 17 00:39:21.545302 kubelet[2082]: I0517 00:39:21.544922 2082 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:39:21.545302 kubelet[2082]: I0517 00:39:21.544958 2082 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.16.188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:39:21.545302 kubelet[2082]: I0517 00:39:21.545198 2082 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:39:21.545302 kubelet[2082]: I0517 00:39:21.545206 2082 container_manager_linux.go:300] "Creating device plugin manager"
May 17 00:39:21.545475 kubelet[2082]: I0517 00:39:21.545302 2082 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:39:21.553426 kubelet[2082]: I0517 00:39:21.553368 2082 kubelet.go:408] "Attempting to sync node with API server"
May 17 00:39:21.553426 kubelet[2082]: I0517 00:39:21.553405 2082 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:39:21.553578 kubelet[2082]: I0517 00:39:21.553451 2082 kubelet.go:314] "Adding apiserver pod source"
May 17 00:39:21.553578 kubelet[2082]: I0517 00:39:21.553471 2082 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:39:21.553578 kubelet[2082]: E0517 00:39:21.553535 2082 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:21.553578 kubelet[2082]: E0517 00:39:21.553564 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:21.559243 kubelet[2082]: W0517 00:39:21.559195 2082 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.16.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 17 00:39:21.559243 kubelet[2082]: E0517 00:39:21.559241 2082 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.16.188\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
May 17 00:39:21.559399 kubelet[2082]: W0517 00:39:21.559353 2082 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 17 00:39:21.559399 kubelet[2082]: E0517 00:39:21.559370 2082 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
May 17 00:39:21.561322 kubelet[2082]: I0517 00:39:21.561297 2082 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 17 00:39:21.561853 kubelet[2082]: I0517 00:39:21.561804 2082 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:39:21.561956 kubelet[2082]: W0517 00:39:21.561872 2082 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:39:21.564621 kubelet[2082]: I0517 00:39:21.564594 2082 server.go:1274] "Started kubelet"
May 17 00:39:21.567549 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
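The deprecated --container-runtime-endpoint flag and the HardEvictionThresholds in the NodeConfig dump above all have counterparts in the kubelet's config file. A minimal sketch of the equivalent KubeletConfiguration, with values taken from the log (the file path is illustrative, and the containerd socket path is an assumption for this host):

```yaml
# Hypothetical /etc/kubernetes/kubelet.conf, passed via --config
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # assumed socket path
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
```

The percentages and the 100Mi memory threshold correspond one-to-one with the HardEvictionThresholds entries logged by container_manager_linux.go.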
May 17 00:39:21.568275 kubelet[2082]: I0517 00:39:21.567679 2082 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:39:21.577992 kubelet[2082]: I0517 00:39:21.577927 2082 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:39:21.583545 kubelet[2082]: I0517 00:39:21.580569 2082 server.go:449] "Adding debug handlers to kubelet server"
May 17 00:39:21.583545 kubelet[2082]: I0517 00:39:21.581578 2082 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:39:21.583545 kubelet[2082]: I0517 00:39:21.581788 2082 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:39:21.583545 kubelet[2082]: I0517 00:39:21.582025 2082 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:39:21.583759 kubelet[2082]: I0517 00:39:21.583742 2082 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 17 00:39:21.584096 kubelet[2082]: E0517 00:39:21.584077 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:21.584880 kubelet[2082]: I0517 00:39:21.584860 2082 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 17 00:39:21.584969 kubelet[2082]: I0517 00:39:21.584910 2082 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:39:21.587180 kubelet[2082]: I0517 00:39:21.587155 2082 factory.go:221] Registration of the systemd container factory successfully
May 17 00:39:21.587287 kubelet[2082]: I0517 00:39:21.587270 2082 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:39:21.588734 kubelet[2082]: E0517 00:39:21.588710 2082 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:39:21.589349 kubelet[2082]: E0517 00:39:21.589331 2082 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.188\" not found" node="172.31.16.188"
May 17 00:39:21.590145 kubelet[2082]: I0517 00:39:21.590125 2082 factory.go:221] Registration of the containerd container factory successfully
May 17 00:39:21.626258 kubelet[2082]: I0517 00:39:21.626234 2082 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 17 00:39:21.626499 kubelet[2082]: I0517 00:39:21.626478 2082 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 17 00:39:21.626579 kubelet[2082]: I0517 00:39:21.626514 2082 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:39:21.631729 kubelet[2082]: I0517 00:39:21.631700 2082 policy_none.go:49] "None policy: Start"
May 17 00:39:21.632453 kubelet[2082]: I0517 00:39:21.632424 2082 memory_manager.go:170] "Starting memorymanager" policy="None"
May 17 00:39:21.632566 kubelet[2082]: I0517 00:39:21.632460 2082 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:39:21.640322 systemd[1]: Created slice kubepods.slice.
May 17 00:39:21.651768 systemd[1]: Created slice kubepods-burstable.slice.
May 17 00:39:21.657456 systemd[1]: Created slice kubepods-besteffort.slice.
May 17 00:39:21.673728 kubelet[2082]: I0517 00:39:21.673700 2082 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:39:21.680773 kubelet[2082]: I0517 00:39:21.680751 2082 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:39:21.681008 kubelet[2082]: I0517 00:39:21.680954 2082 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:39:21.681659 kubelet[2082]: I0517 00:39:21.681642 2082 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:39:21.684458 kubelet[2082]: E0517 00:39:21.684269 2082 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.188\" not found"
May 17 00:39:21.733188 kubelet[2082]: I0517 00:39:21.731269 2082 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:39:21.735070 kubelet[2082]: I0517 00:39:21.735024 2082 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:39:21.735070 kubelet[2082]: I0517 00:39:21.735061 2082 status_manager.go:217] "Starting to sync pod status with apiserver"
May 17 00:39:21.735252 kubelet[2082]: I0517 00:39:21.735086 2082 kubelet.go:2321] "Starting kubelet main sync loop"
May 17 00:39:21.735252 kubelet[2082]: E0517 00:39:21.735140 2082 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
May 17 00:39:21.783175 kubelet[2082]: I0517 00:39:21.783113 2082 kubelet_node_status.go:72] "Attempting to register node" node="172.31.16.188"
May 17 00:39:21.787696 kubelet[2082]: I0517 00:39:21.787661 2082 kubelet_node_status.go:75] "Successfully registered node" node="172.31.16.188"
May 17 00:39:21.787696 kubelet[2082]: E0517 00:39:21.787699 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.188\": node \"172.31.16.188\" not found"
May 17 00:39:21.799570 kubelet[2082]: E0517 00:39:21.799524 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:21.899738 kubelet[2082]: E0517 00:39:21.899687 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.000297 kubelet[2082]: E0517 00:39:22.000176 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.100705 kubelet[2082]: E0517 00:39:22.100668 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.201449 kubelet[2082]: E0517 00:39:22.201409 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.211153 sudo[1962]: pam_unix(sudo:session): session closed for user root
May 17 00:39:22.236759 sshd[1959]: pam_unix(sshd:session): session closed for user core
May 17 00:39:22.239788 systemd[1]: sshd@4-172.31.16.188:22-139.178.68.195:48490.service: Deactivated successfully.
May 17 00:39:22.240495 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:39:22.241047 systemd-logind[1727]: Session 5 logged out. Waiting for processes to exit.
May 17 00:39:22.242024 systemd-logind[1727]: Removed session 5.
May 17 00:39:22.302213 kubelet[2082]: E0517 00:39:22.302092 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.402896 kubelet[2082]: E0517 00:39:22.402844 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.473173 kubelet[2082]: I0517 00:39:22.473132 2082 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
May 17 00:39:22.473329 kubelet[2082]: W0517 00:39:22.473307 2082 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 17 00:39:22.473363 kubelet[2082]: W0517 00:39:22.473338 2082 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 17 00:39:22.503828 kubelet[2082]: E0517 00:39:22.503736 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.554240 kubelet[2082]: E0517 00:39:22.554116 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:22.604486 kubelet[2082]: E0517 00:39:22.604433 2082 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.188\" not found"
May 17 00:39:22.705374 kubelet[2082]: I0517 00:39:22.705339 2082 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.2.0/24"
May 17 00:39:22.705707 env[1735]: time="2025-05-17T00:39:22.705664016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:39:22.706003 kubelet[2082]: I0517 00:39:22.705884 2082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.2.0/24"
May 17 00:39:23.554268 kubelet[2082]: E0517 00:39:23.554218 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:23.554268 kubelet[2082]: I0517 00:39:23.554276 2082 apiserver.go:52] "Watching apiserver"
May 17 00:39:23.572243 systemd[1]: Created slice kubepods-besteffort-podaa448302_dd04_4482_a0e2_20c7517764fd.slice.
May 17 00:39:23.586445 systemd[1]: Created slice kubepods-burstable-pod336f9ddb_68ef_4385_b063_c34da0b06909.slice.
May 17 00:39:23.586965 kubelet[2082]: I0517 00:39:23.586943 2082 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 17 00:39:23.597743 kubelet[2082]: I0517 00:39:23.597705 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-etc-cni-netd\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.597743 kubelet[2082]: I0517 00:39:23.597751 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6rmv\" (UniqueName: \"kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-kube-api-access-b6rmv\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.597945 kubelet[2082]: I0517 00:39:23.597778 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rkkw\" (UniqueName: \"kubernetes.io/projected/aa448302-dd04-4482-a0e2-20c7517764fd-kube-api-access-6rkkw\") pod \"kube-proxy-4q588\" (UID: \"aa448302-dd04-4482-a0e2-20c7517764fd\") " pod="kube-system/kube-proxy-4q588"
May 17 00:39:23.597945 kubelet[2082]: I0517 00:39:23.597795 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa448302-dd04-4482-a0e2-20c7517764fd-kube-proxy\") pod \"kube-proxy-4q588\" (UID: \"aa448302-dd04-4482-a0e2-20c7517764fd\") " pod="kube-system/kube-proxy-4q588"
May 17 00:39:23.597945 kubelet[2082]: I0517 00:39:23.597811 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa448302-dd04-4482-a0e2-20c7517764fd-lib-modules\") pod \"kube-proxy-4q588\" (UID: \"aa448302-dd04-4482-a0e2-20c7517764fd\") " pod="kube-system/kube-proxy-4q588"
May 17 00:39:23.597945 kubelet[2082]: I0517 00:39:23.597834 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-run\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.597945 kubelet[2082]: I0517 00:39:23.597849 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-lib-modules\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.597945 kubelet[2082]: I0517 00:39:23.597864 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-hubble-tls\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598092 kubelet[2082]: I0517 00:39:23.597880 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-config-path\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598092 kubelet[2082]: I0517 00:39:23.597897 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-net\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598092 kubelet[2082]: I0517 00:39:23.597911 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-bpf-maps\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598092 kubelet[2082]: I0517 00:39:23.597925 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-xtables-lock\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598092 kubelet[2082]: I0517 00:39:23.597940 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/336f9ddb-68ef-4385-b063-c34da0b06909-clustermesh-secrets\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598206 kubelet[2082]: I0517 00:39:23.597967 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-kernel\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598206 kubelet[2082]: I0517 00:39:23.597985 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa448302-dd04-4482-a0e2-20c7517764fd-xtables-lock\") pod \"kube-proxy-4q588\" (UID: \"aa448302-dd04-4482-a0e2-20c7517764fd\") " pod="kube-system/kube-proxy-4q588"
May 17 00:39:23.598206 kubelet[2082]: I0517 00:39:23.598004 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-hostproc\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598206 kubelet[2082]: I0517 00:39:23.598018 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-cgroup\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.598206 kubelet[2082]: I0517 00:39:23.598033 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cni-path\") pod \"cilium-jw22m\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") " pod="kube-system/cilium-jw22m"
May 17 00:39:23.700091 kubelet[2082]: I0517 00:39:23.700037 2082 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:39:23.883649 env[1735]: time="2025-05-17T00:39:23.883593878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4q588,Uid:aa448302-dd04-4482-a0e2-20c7517764fd,Namespace:kube-system,Attempt:0,}"
May 17 00:39:23.895653 env[1735]: time="2025-05-17T00:39:23.895601576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jw22m,Uid:336f9ddb-68ef-4385-b063-c34da0b06909,Namespace:kube-system,Attempt:0,}"
May 17 00:39:23.907596 amazon-ssm-agent[1714]: 2025-05-17 00:39:23 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
May 17 00:39:24.428603 env[1735]: time="2025-05-17T00:39:24.428557693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.429977 env[1735]: time="2025-05-17T00:39:24.429936634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.434668 env[1735]: time="2025-05-17T00:39:24.434628394Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.436971 env[1735]: time="2025-05-17T00:39:24.436933660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.439254 env[1735]: time="2025-05-17T00:39:24.439216359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.440563 env[1735]: time="2025-05-17T00:39:24.440535818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.441632 env[1735]: time="2025-05-17T00:39:24.441603083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.444125 env[1735]: time="2025-05-17T00:39:24.444101199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:24.481201 env[1735]: time="2025-05-17T00:39:24.474169431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:24.481201 env[1735]: time="2025-05-17T00:39:24.474215847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:24.481201 env[1735]: time="2025-05-17T00:39:24.474232463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:24.481201 env[1735]: time="2025-05-17T00:39:24.474444122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e1b8fcd73b5ef0745e91e7813553d086b3c9a659e13e876ca45987652484c16 pid=2139 runtime=io.containerd.runc.v2
May 17 00:39:24.481437 env[1735]: time="2025-05-17T00:39:24.477263223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:24.481437 env[1735]: time="2025-05-17T00:39:24.477310219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:24.481437 env[1735]: time="2025-05-17T00:39:24.477330803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:24.481437 env[1735]: time="2025-05-17T00:39:24.477829567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6 pid=2143 runtime=io.containerd.runc.v2
May 17 00:39:24.495086 systemd[1]: Started cri-containerd-7e1b8fcd73b5ef0745e91e7813553d086b3c9a659e13e876ca45987652484c16.scope.
May 17 00:39:24.514452 systemd[1]: Started cri-containerd-d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6.scope.
May 17 00:39:24.537708 env[1735]: time="2025-05-17T00:39:24.537659304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4q588,Uid:aa448302-dd04-4482-a0e2-20c7517764fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e1b8fcd73b5ef0745e91e7813553d086b3c9a659e13e876ca45987652484c16\""
May 17 00:39:24.539803 env[1735]: time="2025-05-17T00:39:24.539766631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 17 00:39:24.546104 env[1735]: time="2025-05-17T00:39:24.545886785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jw22m,Uid:336f9ddb-68ef-4385-b063-c34da0b06909,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\""
May 17 00:39:24.555248 kubelet[2082]: E0517 00:39:24.555203 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:24.709731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208354613.mount: Deactivated successfully.
May 17 00:39:25.555654 kubelet[2082]: E0517 00:39:25.555599 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:25.600730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount212023369.mount: Deactivated successfully.
May 17 00:39:26.269048 env[1735]: time="2025-05-17T00:39:26.268992474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:26.271288 env[1735]: time="2025-05-17T00:39:26.271242718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:26.274126 env[1735]: time="2025-05-17T00:39:26.274089329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:26.276234 env[1735]: time="2025-05-17T00:39:26.276099013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:26.277124 env[1735]: time="2025-05-17T00:39:26.277088577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\""
May 17 00:39:26.279103 env[1735]: time="2025-05-17T00:39:26.279073420Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:39:26.279898 env[1735]: time="2025-05-17T00:39:26.279862788Z" level=info msg="CreateContainer within sandbox \"7e1b8fcd73b5ef0745e91e7813553d086b3c9a659e13e876ca45987652484c16\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:39:26.294783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756006842.mount: Deactivated successfully.
May 17 00:39:26.299797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679546559.mount: Deactivated successfully.
May 17 00:39:26.309694 env[1735]: time="2025-05-17T00:39:26.309612001Z" level=info msg="CreateContainer within sandbox \"7e1b8fcd73b5ef0745e91e7813553d086b3c9a659e13e876ca45987652484c16\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4778ad5088926c7047360dc3e6aa37a01235f1b5dcfdf98655b967ebd6f4eadf\""
May 17 00:39:26.310627 env[1735]: time="2025-05-17T00:39:26.310585629Z" level=info msg="StartContainer for \"4778ad5088926c7047360dc3e6aa37a01235f1b5dcfdf98655b967ebd6f4eadf\""
May 17 00:39:26.336399 systemd[1]: Started cri-containerd-4778ad5088926c7047360dc3e6aa37a01235f1b5dcfdf98655b967ebd6f4eadf.scope.
May 17 00:39:26.376127 env[1735]: time="2025-05-17T00:39:26.376065422Z" level=info msg="StartContainer for \"4778ad5088926c7047360dc3e6aa37a01235f1b5dcfdf98655b967ebd6f4eadf\" returns successfully"
May 17 00:39:26.556438 kubelet[2082]: E0517 00:39:26.556297 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:27.557528 kubelet[2082]: E0517 00:39:27.557425 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:28.557999 kubelet[2082]: E0517 00:39:28.557957 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:29.558175 kubelet[2082]: E0517 00:39:29.558097 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:30.559300 kubelet[2082]: E0517 00:39:30.559189 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:31.548507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404662334.mount: Deactivated successfully.
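The kubelet entries throughout this log use klog's text format (`I0517 00:39:21.238540 2082 server.go:211] …`: severity letter, month/day, time, pid, source file:line, message). A minimal sketch of pulling those header fields out of a line for filtering or counting (the regex and field names here are my own, not from any kubelet tooling):

```python
import re

# klog header: <L><mmdd> <hh:mm:ss.uuuuuu> <pid> <file:line>] <message>
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<pid>\d+)\s+(?P<src>[\w./_-]+:\d+)\]\s*(?P<msg>.*)$'
)

def parse_klog(line):
    """Return a dict of klog header fields, or None if the line is not klog-formatted."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog('E0517 00:39:21.553564 2082 file_linux.go:61] "Unable to read config path"')
# rec["sev"] is "E", rec["src"] is "file_linux.go:61"
```

With the journal prefix (`May 17 … kubelet[2082]: `) stripped first, this makes it easy to, say, count the repeated `file_linux.go:61` errors above or keep only E/W severities.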
May 17 00:39:31.560287 kubelet[2082]: E0517 00:39:31.560212 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:32.560609 kubelet[2082]: E0517 00:39:32.560535 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:33.561343 kubelet[2082]: E0517 00:39:33.561294 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:34.561472 kubelet[2082]: E0517 00:39:34.561430 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:35.168348 env[1735]: time="2025-05-17T00:39:35.168197369Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:35.171671 env[1735]: time="2025-05-17T00:39:35.171630075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:35.175358 env[1735]: time="2025-05-17T00:39:35.175308583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:35.176208 env[1735]: time="2025-05-17T00:39:35.176166586Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 00:39:35.179377 env[1735]: time="2025-05-17T00:39:35.179333025Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:39:35.193244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013079060.mount: Deactivated successfully.
May 17 00:39:35.200907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989715091.mount: Deactivated successfully.
May 17 00:39:35.206439 env[1735]: time="2025-05-17T00:39:35.206368674Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\""
May 17 00:39:35.207095 env[1735]: time="2025-05-17T00:39:35.207066107Z" level=info msg="StartContainer for \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\""
May 17 00:39:35.230023 systemd[1]: Started cri-containerd-9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac.scope.
May 17 00:39:35.273100 env[1735]: time="2025-05-17T00:39:35.273046179Z" level=info msg="StartContainer for \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\" returns successfully"
May 17 00:39:35.280802 systemd[1]: cri-containerd-9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac.scope: Deactivated successfully.
May 17 00:39:35.562780 kubelet[2082]: E0517 00:39:35.562528 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:35.784391 kubelet[2082]: I0517 00:39:35.784336 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4q588" podStartSLOduration=13.045389719 podStartE2EDuration="14.784319253s" podCreationTimestamp="2025-05-17 00:39:21 +0000 UTC" firstStartedPulling="2025-05-17 00:39:24.539332611 +0000 UTC m=+3.352641189" lastFinishedPulling="2025-05-17 00:39:26.278262126 +0000 UTC m=+5.091570723" observedRunningTime="2025-05-17 00:39:26.764747059 +0000 UTC m=+5.578055651" watchObservedRunningTime="2025-05-17 00:39:35.784319253 +0000 UTC m=+14.597627848"
May 17 00:39:36.190219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac-rootfs.mount: Deactivated successfully.
May 17 00:39:36.193352 env[1735]: time="2025-05-17T00:39:36.193300817Z" level=info msg="shim disconnected" id=9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac
May 17 00:39:36.193763 env[1735]: time="2025-05-17T00:39:36.193727553Z" level=warning msg="cleaning up after shim disconnected" id=9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac namespace=k8s.io
May 17 00:39:36.193763 env[1735]: time="2025-05-17T00:39:36.193757741Z" level=info msg="cleaning up dead shim"
May 17 00:39:36.202530 env[1735]: time="2025-05-17T00:39:36.202471140Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2426 runtime=io.containerd.runc.v2\n"
May 17 00:39:36.562937 kubelet[2082]: E0517 00:39:36.562795 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:36.770846 env[1735]: time="2025-05-17T00:39:36.770795213Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:39:36.796935 env[1735]: time="2025-05-17T00:39:36.796876875Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\""
May 17 00:39:36.797944 env[1735]: time="2025-05-17T00:39:36.797895841Z" level=info msg="StartContainer for \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\""
May 17 00:39:36.817297 systemd[1]: Started cri-containerd-d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46.scope.
May 17 00:39:36.853770 env[1735]: time="2025-05-17T00:39:36.853712151Z" level=info msg="StartContainer for \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\" returns successfully"
May 17 00:39:36.864850 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:39:36.865419 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:39:36.865737 systemd[1]: Stopping systemd-sysctl.service...
May 17 00:39:36.868658 systemd[1]: Starting systemd-sysctl.service...
May 17 00:39:36.874850 systemd[1]: cri-containerd-d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46.scope: Deactivated successfully.
May 17 00:39:36.888250 systemd[1]: Finished systemd-sysctl.service.
May 17 00:39:36.914557 env[1735]: time="2025-05-17T00:39:36.914502161Z" level=info msg="shim disconnected" id=d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46
May 17 00:39:36.914557 env[1735]: time="2025-05-17T00:39:36.914553891Z" level=warning msg="cleaning up after shim disconnected" id=d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46 namespace=k8s.io
May 17 00:39:36.914557 env[1735]: time="2025-05-17T00:39:36.914565823Z" level=info msg="cleaning up dead shim"
May 17 00:39:36.923497 env[1735]: time="2025-05-17T00:39:36.923450472Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2492 runtime=io.containerd.runc.v2\n"
May 17 00:39:37.189204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46-rootfs.mount: Deactivated successfully.
May 17 00:39:37.563971 kubelet[2082]: E0517 00:39:37.563857 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:37.774455 env[1735]: time="2025-05-17T00:39:37.774397905Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:39:37.801223 env[1735]: time="2025-05-17T00:39:37.801172066Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\""
May 17 00:39:37.801827 env[1735]: time="2025-05-17T00:39:37.801789834Z" level=info msg="StartContainer for \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\""
May 17 00:39:37.823425 systemd[1]: Started cri-containerd-e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37.scope.
May 17 00:39:37.857968 systemd[1]: cri-containerd-e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37.scope: Deactivated successfully.
May 17 00:39:37.860218 env[1735]: time="2025-05-17T00:39:37.860165992Z" level=info msg="StartContainer for \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\" returns successfully"
May 17 00:39:37.887095 env[1735]: time="2025-05-17T00:39:37.887021954Z" level=info msg="shim disconnected" id=e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37
May 17 00:39:37.887095 env[1735]: time="2025-05-17T00:39:37.887069093Z" level=warning msg="cleaning up after shim disconnected" id=e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37 namespace=k8s.io
May 17 00:39:37.887095 env[1735]: time="2025-05-17T00:39:37.887079056Z" level=info msg="cleaning up dead shim"
May 17 00:39:37.895301 env[1735]: time="2025-05-17T00:39:37.895253683Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2548 runtime=io.containerd.runc.v2\n"
May 17 00:39:38.189222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37-rootfs.mount: Deactivated successfully.
May 17 00:39:38.564109 kubelet[2082]: E0517 00:39:38.563978 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:38.778685 env[1735]: time="2025-05-17T00:39:38.778641481Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:39:38.798696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726059338.mount: Deactivated successfully.
May 17 00:39:38.816286 env[1735]: time="2025-05-17T00:39:38.815879810Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\""
May 17 00:39:38.816700 env[1735]: time="2025-05-17T00:39:38.816633757Z" level=info msg="StartContainer for \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\""
May 17 00:39:38.840354 systemd[1]: Started cri-containerd-b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef.scope.
May 17 00:39:38.872189 systemd[1]: cri-containerd-b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef.scope: Deactivated successfully.
May 17 00:39:38.878956 env[1735]: time="2025-05-17T00:39:38.878879966Z" level=info msg="StartContainer for \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\" returns successfully"
May 17 00:39:38.879688 env[1735]: time="2025-05-17T00:39:38.879618872Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336f9ddb_68ef_4385_b063_c34da0b06909.slice/cri-containerd-b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef.scope/memory.events\": no such file or directory"
May 17 00:39:38.905120 env[1735]: time="2025-05-17T00:39:38.905071050Z" level=info msg="shim disconnected" id=b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef
May 17 00:39:38.905120 env[1735]: time="2025-05-17T00:39:38.905118496Z" level=warning msg="cleaning up after shim disconnected" id=b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef namespace=k8s.io
May 17 00:39:38.905120 env[1735]: time="2025-05-17T00:39:38.905128767Z" level=info msg="cleaning up dead shim"
May 17 00:39:38.913123 env[1735]: time="2025-05-17T00:39:38.913086142Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:39:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2605 runtime=io.containerd.runc.v2\n"
May 17 00:39:39.189436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef-rootfs.mount: Deactivated successfully.
May 17 00:39:39.565069 kubelet[2082]: E0517 00:39:39.564939 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:39.782781 env[1735]: time="2025-05-17T00:39:39.782743881Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:39:39.799850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3808358563.mount: Deactivated successfully.
May 17 00:39:39.805422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279499072.mount: Deactivated successfully.
May 17 00:39:39.814804 env[1735]: time="2025-05-17T00:39:39.814740748Z" level=info msg="CreateContainer within sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\""
May 17 00:39:39.815346 env[1735]: time="2025-05-17T00:39:39.815256410Z" level=info msg="StartContainer for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\""
May 17 00:39:39.835437 systemd[1]: Started cri-containerd-5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a.scope.
May 17 00:39:39.877917 env[1735]: time="2025-05-17T00:39:39.877859971Z" level=info msg="StartContainer for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" returns successfully"
May 17 00:39:40.060801 kubelet[2082]: I0517 00:39:40.060775 2082 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 17 00:39:40.343848 kernel: Initializing XFRM netlink socket
May 17 00:39:40.565479 kubelet[2082]: E0517 00:39:40.565422 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:41.554200 kubelet[2082]: E0517 00:39:41.554141 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:41.566108 kubelet[2082]: E0517 00:39:41.566023 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:42.003031 systemd-networkd[1458]: cilium_host: Link UP
May 17 00:39:42.004400 systemd-networkd[1458]: cilium_net: Link UP
May 17 00:39:42.007920 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 17 00:39:42.008025 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 00:39:42.007948 systemd-networkd[1458]: cilium_net: Gained carrier
May 17 00:39:42.008195 systemd-networkd[1458]: cilium_host: Gained carrier
May 17 00:39:42.008567 (udev-worker)[2710]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:42.009237 (udev-worker)[2711]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:42.081150 systemd-networkd[1458]: cilium_host: Gained IPv6LL
May 17 00:39:42.125599 (udev-worker)[2771]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:42.132200 systemd-networkd[1458]: cilium_vxlan: Link UP
May 17 00:39:42.132208 systemd-networkd[1458]: cilium_vxlan: Gained carrier
May 17 00:39:42.361856 kernel: NET: Registered PF_ALG protocol family
May 17 00:39:42.566945 kubelet[2082]: E0517 00:39:42.566872 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:42.665248 systemd-networkd[1458]: cilium_net: Gained IPv6LL
May 17 00:39:43.040873 systemd-networkd[1458]: lxc_health: Link UP
May 17 00:39:43.063582 systemd-networkd[1458]: lxc_health: Gained carrier
May 17 00:39:43.064286 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:39:43.498804 systemd-networkd[1458]: cilium_vxlan: Gained IPv6LL
May 17 00:39:43.567719 kubelet[2082]: E0517 00:39:43.567666 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:43.916993 kubelet[2082]: I0517 00:39:43.916923 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jw22m" podStartSLOduration=12.286334878 podStartE2EDuration="22.916883937s" podCreationTimestamp="2025-05-17 00:39:21 +0000 UTC" firstStartedPulling="2025-05-17 00:39:24.546976063 +0000 UTC m=+3.360284642" lastFinishedPulling="2025-05-17 00:39:35.177525107 +0000 UTC m=+13.990833701" observedRunningTime="2025-05-17 00:39:40.814935624 +0000 UTC m=+19.628244219" watchObservedRunningTime="2025-05-17 00:39:43.916883937 +0000 UTC m=+22.730192533"
May 17 00:39:44.329050 systemd-networkd[1458]: lxc_health: Gained IPv6LL
May 17 00:39:44.568603 kubelet[2082]: E0517 00:39:44.568550 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:44.933538 systemd[1]: Created slice kubepods-besteffort-pode047bef0_ee4a_4207_9dd6_60fc12ab30c5.slice.
May 17 00:39:44.953507 kubelet[2082]: I0517 00:39:44.953464 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9cx\" (UniqueName: \"kubernetes.io/projected/e047bef0-ee4a-4207-9dd6-60fc12ab30c5-kube-api-access-bn9cx\") pod \"nginx-deployment-8587fbcb89-t5p96\" (UID: \"e047bef0-ee4a-4207-9dd6-60fc12ab30c5\") " pod="default/nginx-deployment-8587fbcb89-t5p96"
May 17 00:39:45.239976 env[1735]: time="2025-05-17T00:39:45.238937269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-t5p96,Uid:e047bef0-ee4a-4207-9dd6-60fc12ab30c5,Namespace:default,Attempt:0,}"
May 17 00:39:45.306664 systemd-networkd[1458]: lxc5629d6843e48: Link UP
May 17 00:39:45.322172 kernel: eth0: renamed from tmpadfb7
May 17 00:39:45.331476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:39:45.331614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5629d6843e48: link becomes ready
May 17 00:39:45.331875 systemd-networkd[1458]: lxc5629d6843e48: Gained carrier
May 17 00:39:45.336139 kubelet[2082]: I0517 00:39:45.334572 2082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:39:45.541659 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 17 00:39:45.569679 kubelet[2082]: E0517 00:39:45.569638 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:46.571222 kubelet[2082]: E0517 00:39:46.571167 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:46.889085 systemd-networkd[1458]: lxc5629d6843e48: Gained IPv6LL
May 17 00:39:47.571791 kubelet[2082]: E0517 00:39:47.571738 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:48.160171 env[1735]: time="2025-05-17T00:39:48.160079685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:48.160738 env[1735]: time="2025-05-17T00:39:48.160134302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:48.160738 env[1735]: time="2025-05-17T00:39:48.160149784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:48.160738 env[1735]: time="2025-05-17T00:39:48.160344721Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adfb7b3847b820a1e1351cf92f00cc895b54ad8a11ccb6f9bccb6b9bc2d643d8 pid=3127 runtime=io.containerd.runc.v2
May 17 00:39:48.180015 systemd[1]: Started cri-containerd-adfb7b3847b820a1e1351cf92f00cc895b54ad8a11ccb6f9bccb6b9bc2d643d8.scope.
May 17 00:39:48.232593 env[1735]: time="2025-05-17T00:39:48.232546281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-t5p96,Uid:e047bef0-ee4a-4207-9dd6-60fc12ab30c5,Namespace:default,Attempt:0,} returns sandbox id \"adfb7b3847b820a1e1351cf92f00cc895b54ad8a11ccb6f9bccb6b9bc2d643d8\""
May 17 00:39:48.235432 env[1735]: time="2025-05-17T00:39:48.235398907Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 17 00:39:48.572886 kubelet[2082]: E0517 00:39:48.572754 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:49.573772 kubelet[2082]: E0517 00:39:49.573689 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:50.574544 kubelet[2082]: E0517 00:39:50.574471 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:50.747130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844896025.mount: Deactivated successfully.
May 17 00:39:51.575035 kubelet[2082]: E0517 00:39:51.574963 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:52.370447 env[1735]: time="2025-05-17T00:39:52.370380089Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:52.373959 env[1735]: time="2025-05-17T00:39:52.373912309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:52.376303 env[1735]: time="2025-05-17T00:39:52.376260093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:52.378446 env[1735]: time="2025-05-17T00:39:52.378403585Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:39:52.379146 env[1735]: time="2025-05-17T00:39:52.379104923Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\""
May 17 00:39:52.382331 env[1735]: time="2025-05-17T00:39:52.382289880Z" level=info msg="CreateContainer within sandbox \"adfb7b3847b820a1e1351cf92f00cc895b54ad8a11ccb6f9bccb6b9bc2d643d8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
May 17 00:39:52.402398 env[1735]: time="2025-05-17T00:39:52.402342158Z" level=info msg="CreateContainer within sandbox \"adfb7b3847b820a1e1351cf92f00cc895b54ad8a11ccb6f9bccb6b9bc2d643d8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fe9ac0713f5bfd0c73bc1b5c423614bdbfcdb8afad8a1be00d3069299ae8e0f1\""
May 17 00:39:52.403169 env[1735]: time="2025-05-17T00:39:52.403136929Z" level=info msg="StartContainer for \"fe9ac0713f5bfd0c73bc1b5c423614bdbfcdb8afad8a1be00d3069299ae8e0f1\""
May 17 00:39:52.427336 systemd[1]: Started cri-containerd-fe9ac0713f5bfd0c73bc1b5c423614bdbfcdb8afad8a1be00d3069299ae8e0f1.scope.
May 17 00:39:52.462875 env[1735]: time="2025-05-17T00:39:52.462806312Z" level=info msg="StartContainer for \"fe9ac0713f5bfd0c73bc1b5c423614bdbfcdb8afad8a1be00d3069299ae8e0f1\" returns successfully"
May 17 00:39:52.575699 kubelet[2082]: E0517 00:39:52.575645 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:52.828752 kubelet[2082]: I0517 00:39:52.828598 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-t5p96" podStartSLOduration=4.682882114 podStartE2EDuration="8.828576335s" podCreationTimestamp="2025-05-17 00:39:44 +0000 UTC" firstStartedPulling="2025-05-17 00:39:48.234859141 +0000 UTC m=+27.048167717" lastFinishedPulling="2025-05-17 00:39:52.380553348 +0000 UTC m=+31.193861938" observedRunningTime="2025-05-17 00:39:52.828480115 +0000 UTC m=+31.641788711" watchObservedRunningTime="2025-05-17 00:39:52.828576335 +0000 UTC m=+31.641884932"
May 17 00:39:53.576116 kubelet[2082]: E0517 00:39:53.576009 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:53.932490 amazon-ssm-agent[1714]: 2025-05-17 00:39:53 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
May 17 00:39:54.576709 kubelet[2082]: E0517 00:39:54.576651 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:55.577093 kubelet[2082]: E0517 00:39:55.577032 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:56.578043 kubelet[2082]: E0517 00:39:56.577980 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:57.578692 kubelet[2082]: E0517 00:39:57.578632 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:58.579460 kubelet[2082]: E0517 00:39:58.579398 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:59.219780 systemd[1]: Created slice kubepods-besteffort-pod1d249c50_af15_49c4_8c3b_af7189589fd5.slice.
May 17 00:39:59.265653 kubelet[2082]: I0517 00:39:59.265617 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1d249c50-af15-49c4-8c3b-af7189589fd5-data\") pod \"nfs-server-provisioner-0\" (UID: \"1d249c50-af15-49c4-8c3b-af7189589fd5\") " pod="default/nfs-server-provisioner-0"
May 17 00:39:59.265898 kubelet[2082]: I0517 00:39:59.265881 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf989\" (UniqueName: \"kubernetes.io/projected/1d249c50-af15-49c4-8c3b-af7189589fd5-kube-api-access-bf989\") pod \"nfs-server-provisioner-0\" (UID: \"1d249c50-af15-49c4-8c3b-af7189589fd5\") " pod="default/nfs-server-provisioner-0"
May 17 00:39:59.523655 env[1735]: time="2025-05-17T00:39:59.523542296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1d249c50-af15-49c4-8c3b-af7189589fd5,Namespace:default,Attempt:0,}"
May 17 00:39:59.579908 kubelet[2082]: E0517 00:39:59.579859 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:39:59.591521 (udev-worker)[3222]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:59.593909 (udev-worker)[3238]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:39:59.598275 systemd-networkd[1458]: lxc6d7ae93870d7: Link UP
May 17 00:39:59.605847 kernel: eth0: renamed from tmpa1703
May 17 00:39:59.610215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:39:59.610316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6d7ae93870d7: link becomes ready
May 17 00:39:59.610658 systemd-networkd[1458]: lxc6d7ae93870d7: Gained carrier
May 17 00:39:59.768813 env[1735]: time="2025-05-17T00:39:59.768721294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:39:59.768813 env[1735]: time="2025-05-17T00:39:59.768766713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:39:59.769128 env[1735]: time="2025-05-17T00:39:59.768783896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:39:59.769128 env[1735]: time="2025-05-17T00:39:59.768961476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1703038bd76a36c7666e8bae0364a006e9b1d572dc492ba8ed31ed2d25d67c7 pid=3252 runtime=io.containerd.runc.v2
May 17 00:39:59.791618 systemd[1]: run-containerd-runc-k8s.io-a1703038bd76a36c7666e8bae0364a006e9b1d572dc492ba8ed31ed2d25d67c7-runc.Kallvy.mount: Deactivated successfully.
May 17 00:39:59.798094 systemd[1]: Started cri-containerd-a1703038bd76a36c7666e8bae0364a006e9b1d572dc492ba8ed31ed2d25d67c7.scope.
May 17 00:39:59.846422 env[1735]: time="2025-05-17T00:39:59.846292949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1d249c50-af15-49c4-8c3b-af7189589fd5,Namespace:default,Attempt:0,} returns sandbox id \"a1703038bd76a36c7666e8bae0364a006e9b1d572dc492ba8ed31ed2d25d67c7\""
May 17 00:39:59.848372 env[1735]: time="2025-05-17T00:39:59.848331957Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
May 17 00:40:00.125965 update_engine[1729]: I0517 00:40:00.125900 1729 update_attempter.cc:509] Updating boot flags...
May 17 00:40:00.580877 kubelet[2082]: E0517 00:40:00.580401 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:00.970013 systemd-networkd[1458]: lxc6d7ae93870d7: Gained IPv6LL
May 17 00:40:01.553937 kubelet[2082]: E0517 00:40:01.553881 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:01.581384 kubelet[2082]: E0517 00:40:01.581343 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:02.435407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334422027.mount: Deactivated successfully.
May 17 00:40:02.581777 kubelet[2082]: E0517 00:40:02.581731 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:03.582807 kubelet[2082]: E0517 00:40:03.582765 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:04.568920 env[1735]: time="2025-05-17T00:40:04.568862197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:04.572076 env[1735]: time="2025-05-17T00:40:04.572024748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:04.574798 env[1735]: time="2025-05-17T00:40:04.574759681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:04.576574 env[1735]: time="2025-05-17T00:40:04.576535104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:04.577237 env[1735]: time="2025-05-17T00:40:04.577205052Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
May 17 00:40:04.580354 env[1735]: time="2025-05-17T00:40:04.580310057Z" level=info msg="CreateContainer within sandbox \"a1703038bd76a36c7666e8bae0364a006e9b1d572dc492ba8ed31ed2d25d67c7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
May 17 00:40:04.583754 kubelet[2082]: E0517 00:40:04.583708 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:04.591677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183584513.mount: Deactivated successfully.
May 17 00:40:04.600951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625744314.mount: Deactivated successfully.
May 17 00:40:04.605000 env[1735]: time="2025-05-17T00:40:04.604948221Z" level=info msg="CreateContainer within sandbox \"a1703038bd76a36c7666e8bae0364a006e9b1d572dc492ba8ed31ed2d25d67c7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5af56115c976a9bda35bf9908cdd96d86e368bfb4fea22ffb9ec75e807a9e997\""
May 17 00:40:04.605683 env[1735]: time="2025-05-17T00:40:04.605649131Z" level=info msg="StartContainer for \"5af56115c976a9bda35bf9908cdd96d86e368bfb4fea22ffb9ec75e807a9e997\""
May 17 00:40:04.632582 systemd[1]: Started cri-containerd-5af56115c976a9bda35bf9908cdd96d86e368bfb4fea22ffb9ec75e807a9e997.scope.
May 17 00:40:04.666743 env[1735]: time="2025-05-17T00:40:04.666200273Z" level=info msg="StartContainer for \"5af56115c976a9bda35bf9908cdd96d86e368bfb4fea22ffb9ec75e807a9e997\" returns successfully"
May 17 00:40:04.858960 kubelet[2082]: I0517 00:40:04.858882 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.127961824 podStartE2EDuration="5.858864975s" podCreationTimestamp="2025-05-17 00:39:59 +0000 UTC" firstStartedPulling="2025-05-17 00:39:59.847737068 +0000 UTC m=+38.661045642" lastFinishedPulling="2025-05-17 00:40:04.578640208 +0000 UTC m=+43.391948793" observedRunningTime="2025-05-17 00:40:04.858317463 +0000 UTC m=+43.671626066" watchObservedRunningTime="2025-05-17 00:40:04.858864975 +0000 UTC m=+43.672173571"
May 17 00:40:05.584595 kubelet[2082]: E0517 00:40:05.584534 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:06.585318 kubelet[2082]: E0517 00:40:06.585262 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:07.586154 kubelet[2082]: E0517 00:40:07.586076 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:08.586561 kubelet[2082]: E0517 00:40:08.586384 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:09.587479 kubelet[2082]: E0517 00:40:09.587425 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:10.588063 kubelet[2082]: E0517 00:40:10.588021 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:11.588263 kubelet[2082]: E0517 00:40:11.588195 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:12.588841 kubelet[2082]: E0517 00:40:12.588783 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:13.589515 kubelet[2082]: E0517 00:40:13.589458 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:14.551056 systemd[1]: Created slice kubepods-besteffort-pod6ea28649_db13_4ae1_ba59_75c7aae4ec16.slice.
May 17 00:40:14.573587 kubelet[2082]: I0517 00:40:14.573536 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j42jz\" (UniqueName: \"kubernetes.io/projected/6ea28649-db13-4ae1-ba59-75c7aae4ec16-kube-api-access-j42jz\") pod \"test-pod-1\" (UID: \"6ea28649-db13-4ae1-ba59-75c7aae4ec16\") " pod="default/test-pod-1"
May 17 00:40:14.573804 kubelet[2082]: I0517 00:40:14.573785 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60e1b868-203d-4cb0-98d7-a8314c01fedf\" (UniqueName: \"kubernetes.io/nfs/6ea28649-db13-4ae1-ba59-75c7aae4ec16-pvc-60e1b868-203d-4cb0-98d7-a8314c01fedf\") pod \"test-pod-1\" (UID: \"6ea28649-db13-4ae1-ba59-75c7aae4ec16\") " pod="default/test-pod-1"
May 17 00:40:14.590137 kubelet[2082]: E0517 00:40:14.590096 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:14.728852 kernel: FS-Cache: Loaded
May 17 00:40:14.781516 kernel: RPC: Registered named UNIX socket transport module.
May 17 00:40:14.781705 kernel: RPC: Registered udp transport module.
May 17 00:40:14.781746 kernel: RPC: Registered tcp transport module.
May 17 00:40:14.782355 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
May 17 00:40:14.852846 kernel: FS-Cache: Netfs 'nfs' registered for caching
May 17 00:40:15.050408 kernel: NFS: Registering the id_resolver key type
May 17 00:40:15.050536 kernel: Key type id_resolver registered
May 17 00:40:15.051391 kernel: Key type id_legacy registered
May 17 00:40:15.099879 nfsidmap[3463]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
May 17 00:40:15.103976 nfsidmap[3464]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
May 17 00:40:15.155351 env[1735]: time="2025-05-17T00:40:15.154999311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6ea28649-db13-4ae1-ba59-75c7aae4ec16,Namespace:default,Attempt:0,}"
May 17 00:40:15.199158 systemd-networkd[1458]: lxcef42562cb08e: Link UP
May 17 00:40:15.204666 (udev-worker)[3450]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:40:15.207342 kernel: eth0: renamed from tmp03a08
May 17 00:40:15.213087 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 17 00:40:15.213205 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcef42562cb08e: link becomes ready
May 17 00:40:15.213400 systemd-networkd[1458]: lxcef42562cb08e: Gained carrier
May 17 00:40:15.392269 env[1735]: time="2025-05-17T00:40:15.392111070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:40:15.392269 env[1735]: time="2025-05-17T00:40:15.392157835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:40:15.392269 env[1735]: time="2025-05-17T00:40:15.392168816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:40:15.392688 env[1735]: time="2025-05-17T00:40:15.392656230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03a08c6f18df38f5daad237950f219b894bdeb2ac1c74885c9478c22a93995f8 pid=3487 runtime=io.containerd.runc.v2
May 17 00:40:15.407110 systemd[1]: Started cri-containerd-03a08c6f18df38f5daad237950f219b894bdeb2ac1c74885c9478c22a93995f8.scope.
May 17 00:40:15.456960 env[1735]: time="2025-05-17T00:40:15.456915347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6ea28649-db13-4ae1-ba59-75c7aae4ec16,Namespace:default,Attempt:0,} returns sandbox id \"03a08c6f18df38f5daad237950f219b894bdeb2ac1c74885c9478c22a93995f8\""
May 17 00:40:15.459131 env[1735]: time="2025-05-17T00:40:15.459100253Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 17 00:40:15.590648 kubelet[2082]: E0517 00:40:15.590591 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:15.762614 env[1735]: time="2025-05-17T00:40:15.762477913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:15.766614 env[1735]: time="2025-05-17T00:40:15.766574741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:15.769500 env[1735]: time="2025-05-17T00:40:15.769456929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:15.772435 env[1735]: time="2025-05-17T00:40:15.772394848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:40:15.773144 env[1735]: time="2025-05-17T00:40:15.773105455Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\""
May 17 00:40:15.775703 env[1735]: time="2025-05-17T00:40:15.775650519Z" level=info msg="CreateContainer within sandbox \"03a08c6f18df38f5daad237950f219b894bdeb2ac1c74885c9478c22a93995f8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 17 00:40:15.806138 env[1735]: time="2025-05-17T00:40:15.806072179Z" level=info msg="CreateContainer within sandbox \"03a08c6f18df38f5daad237950f219b894bdeb2ac1c74885c9478c22a93995f8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0325d0de9370e332be1e06b39faf6f4596fc7ae81584c11becb5e6ef8c5594ee\""
May 17 00:40:15.806913 env[1735]: time="2025-05-17T00:40:15.806870241Z" level=info msg="StartContainer for \"0325d0de9370e332be1e06b39faf6f4596fc7ae81584c11becb5e6ef8c5594ee\""
May 17 00:40:15.824342 systemd[1]: Started cri-containerd-0325d0de9370e332be1e06b39faf6f4596fc7ae81584c11becb5e6ef8c5594ee.scope.
May 17 00:40:15.858498 env[1735]: time="2025-05-17T00:40:15.858371563Z" level=info msg="StartContainer for \"0325d0de9370e332be1e06b39faf6f4596fc7ae81584c11becb5e6ef8c5594ee\" returns successfully"
May 17 00:40:16.590890 kubelet[2082]: E0517 00:40:16.590836 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:16.905082 systemd-networkd[1458]: lxcef42562cb08e: Gained IPv6LL
May 17 00:40:17.591660 kubelet[2082]: E0517 00:40:17.591612 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:18.592176 kubelet[2082]: E0517 00:40:18.592121 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:18.721751 kubelet[2082]: I0517 00:40:18.721698 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.405840084 podStartE2EDuration="19.721678858s" podCreationTimestamp="2025-05-17 00:39:59 +0000 UTC" firstStartedPulling="2025-05-17 00:40:15.458467076 +0000 UTC m=+54.271775649" lastFinishedPulling="2025-05-17 00:40:15.774305846 +0000 UTC m=+54.587614423" observedRunningTime="2025-05-17 00:40:15.885389029 +0000 UTC m=+54.698697624" watchObservedRunningTime="2025-05-17 00:40:18.721678858 +0000 UTC m=+57.534987508"
May 17 00:40:18.740183 systemd[1]: run-containerd-runc-k8s.io-5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a-runc.4MqZ2t.mount: Deactivated successfully.
May 17 00:40:18.763658 env[1735]: time="2025-05-17T00:40:18.763603657Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:40:18.769730 env[1735]: time="2025-05-17T00:40:18.769680633Z" level=info msg="StopContainer for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" with timeout 2 (s)"
May 17 00:40:18.770174 env[1735]: time="2025-05-17T00:40:18.770148428Z" level=info msg="Stop container \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" with signal terminated"
May 17 00:40:18.776636 systemd-networkd[1458]: lxc_health: Link DOWN
May 17 00:40:18.776642 systemd-networkd[1458]: lxc_health: Lost carrier
May 17 00:40:18.804303 systemd[1]: cri-containerd-5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a.scope: Deactivated successfully.
May 17 00:40:18.804637 systemd[1]: cri-containerd-5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a.scope: Consumed 7.316s CPU time.
May 17 00:40:18.830517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a-rootfs.mount: Deactivated successfully.
May 17 00:40:18.861367 env[1735]: time="2025-05-17T00:40:18.861317705Z" level=info msg="shim disconnected" id=5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a
May 17 00:40:18.861367 env[1735]: time="2025-05-17T00:40:18.861363667Z" level=warning msg="cleaning up after shim disconnected" id=5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a namespace=k8s.io
May 17 00:40:18.861367 env[1735]: time="2025-05-17T00:40:18.861373540Z" level=info msg="cleaning up dead shim"
May 17 00:40:18.870010 env[1735]: time="2025-05-17T00:40:18.869967502Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3620 runtime=io.containerd.runc.v2\n"
May 17 00:40:18.874114 env[1735]: time="2025-05-17T00:40:18.874057446Z" level=info msg="StopContainer for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" returns successfully"
May 17 00:40:18.874905 env[1735]: time="2025-05-17T00:40:18.874869629Z" level=info msg="StopPodSandbox for \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\""
May 17 00:40:18.875024 env[1735]: time="2025-05-17T00:40:18.874937281Z" level=info msg="Container to stop \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:40:18.875024 env[1735]: time="2025-05-17T00:40:18.874957640Z" level=info msg="Container to stop \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:40:18.875024 env[1735]: time="2025-05-17T00:40:18.874973371Z" level=info msg="Container to stop \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:40:18.875024 env[1735]: time="2025-05-17T00:40:18.874989190Z" level=info msg="Container to stop \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:40:18.875024 env[1735]: time="2025-05-17T00:40:18.875004516Z" level=info msg="Container to stop \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:40:18.877546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6-shm.mount: Deactivated successfully.
May 17 00:40:18.885099 systemd[1]: cri-containerd-d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6.scope: Deactivated successfully.
May 17 00:40:18.918780 env[1735]: time="2025-05-17T00:40:18.918723362Z" level=info msg="shim disconnected" id=d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6
May 17 00:40:18.919100 env[1735]: time="2025-05-17T00:40:18.918783689Z" level=warning msg="cleaning up after shim disconnected" id=d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6 namespace=k8s.io
May 17 00:40:18.919100 env[1735]: time="2025-05-17T00:40:18.918796078Z" level=info msg="cleaning up dead shim"
May 17 00:40:18.927776 env[1735]: time="2025-05-17T00:40:18.927717200Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3651 runtime=io.containerd.runc.v2\n"
May 17 00:40:18.928748 env[1735]: time="2025-05-17T00:40:18.928709130Z" level=info msg="TearDown network for sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" successfully"
May 17 00:40:18.928748 env[1735]: time="2025-05-17T00:40:18.928742555Z" level=info msg="StopPodSandbox for \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" returns successfully"
May 17 00:40:19.006285 kubelet[2082]: I0517 00:40:19.006234 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-lib-modules\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006285 kubelet[2082]: I0517 00:40:19.006289 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-hubble-tls\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006654 kubelet[2082]: I0517 00:40:19.006309 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-config-path\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006654 kubelet[2082]: I0517 00:40:19.006327 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-xtables-lock\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006654 kubelet[2082]: I0517 00:40:19.006343 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-bpf-maps\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006654 kubelet[2082]: I0517 00:40:19.006421 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-run\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006654 kubelet[2082]: I0517 00:40:19.006441 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-net\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006654 kubelet[2082]: I0517 00:40:19.006462 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/336f9ddb-68ef-4385-b063-c34da0b06909-clustermesh-secrets\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006872 kubelet[2082]: I0517 00:40:19.006478 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cni-path\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006872 kubelet[2082]: I0517 00:40:19.006491 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-etc-cni-netd\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006872 kubelet[2082]: I0517 00:40:19.006504 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-hostproc\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006872 kubelet[2082]: I0517 00:40:19.006518 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-cgroup\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006872 kubelet[2082]: I0517 00:40:19.006535 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6rmv\" (UniqueName: \"kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-kube-api-access-b6rmv\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.006872 kubelet[2082]: I0517 00:40:19.006550 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-kernel\") pod \"336f9ddb-68ef-4385-b063-c34da0b06909\" (UID: \"336f9ddb-68ef-4385-b063-c34da0b06909\") "
May 17 00:40:19.007031 kubelet[2082]: I0517 00:40:19.006615 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.007031 kubelet[2082]: I0517 00:40:19.006657 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.008157 kubelet[2082]: I0517 00:40:19.008127 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cni-path" (OuterVolumeSpecName: "cni-path") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.008324 kubelet[2082]: I0517 00:40:19.008310 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.008414 kubelet[2082]: I0517 00:40:19.008404 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-hostproc" (OuterVolumeSpecName: "hostproc") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.008501 kubelet[2082]: I0517 00:40:19.008491 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.010015 kubelet[2082]: I0517 00:40:19.009986 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 17 00:40:19.010119 kubelet[2082]: I0517 00:40:19.010043 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.010119 kubelet[2082]: I0517 00:40:19.010060 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.010119 kubelet[2082]: I0517 00:40:19.010073 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.010119 kubelet[2082]: I0517 00:40:19.010087 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 17 00:40:19.012633 kubelet[2082]: I0517 00:40:19.012603 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/336f9ddb-68ef-4385-b063-c34da0b06909-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 17 00:40:19.012738 kubelet[2082]: I0517 00:40:19.012682 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:40:19.015077 kubelet[2082]: I0517 00:40:19.015032 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-kube-api-access-b6rmv" (OuterVolumeSpecName: "kube-api-access-b6rmv") pod "336f9ddb-68ef-4385-b063-c34da0b06909" (UID: "336f9ddb-68ef-4385-b063-c34da0b06909"). InnerVolumeSpecName "kube-api-access-b6rmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 17 00:40:19.106831 kubelet[2082]: I0517 00:40:19.106764 2082 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-lib-modules\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.106831 kubelet[2082]: I0517 00:40:19.106809 2082 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-hubble-tls\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.106831 kubelet[2082]: I0517 00:40:19.106855 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-config-path\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106867 2082 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-bpf-maps\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106876 2082 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-xtables-lock\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106883 2082 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cni-path\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106890 2082 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-etc-cni-netd\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106897 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-run\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106905 2082 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-net\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106914 2082 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/336f9ddb-68ef-4385-b063-c34da0b06909-clustermesh-secrets\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107083 kubelet[2082]: I0517 00:40:19.106922 2082 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6rmv\" (UniqueName: \"kubernetes.io/projected/336f9ddb-68ef-4385-b063-c34da0b06909-kube-api-access-b6rmv\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107292 kubelet[2082]: I0517 00:40:19.106930 2082 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-host-proc-sys-kernel\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107292 kubelet[2082]: I0517 00:40:19.106937 2082 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-hostproc\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.107292 kubelet[2082]: I0517 00:40:19.106944 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/336f9ddb-68ef-4385-b063-c34da0b06909-cilium-cgroup\") on node \"172.31.16.188\" DevicePath \"\""
May 17 00:40:19.592767 kubelet[2082]: E0517 00:40:19.592714 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:40:19.734479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6-rootfs.mount: Deactivated successfully.
May 17 00:40:19.734600 systemd[1]: var-lib-kubelet-pods-336f9ddb\x2d68ef\x2d4385\x2db063\x2dc34da0b06909-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db6rmv.mount: Deactivated successfully.
May 17 00:40:19.734678 systemd[1]: var-lib-kubelet-pods-336f9ddb\x2d68ef\x2d4385\x2db063\x2dc34da0b06909-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:40:19.734736 systemd[1]: var-lib-kubelet-pods-336f9ddb\x2d68ef\x2d4385\x2db063\x2dc34da0b06909-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:40:19.742177 systemd[1]: Removed slice kubepods-burstable-pod336f9ddb_68ef_4385_b063_c34da0b06909.slice.
May 17 00:40:19.742309 systemd[1]: kubepods-burstable-pod336f9ddb_68ef_4385_b063_c34da0b06909.slice: Consumed 7.430s CPU time.
May 17 00:40:19.876762 kubelet[2082]: I0517 00:40:19.876734 2082 scope.go:117] "RemoveContainer" containerID="5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a" May 17 00:40:19.879054 env[1735]: time="2025-05-17T00:40:19.879010346Z" level=info msg="RemoveContainer for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\"" May 17 00:40:19.884654 env[1735]: time="2025-05-17T00:40:19.884604565Z" level=info msg="RemoveContainer for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" returns successfully" May 17 00:40:19.885904 kubelet[2082]: I0517 00:40:19.885876 2082 scope.go:117] "RemoveContainer" containerID="b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef" May 17 00:40:19.887415 env[1735]: time="2025-05-17T00:40:19.887374881Z" level=info msg="RemoveContainer for \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\"" May 17 00:40:19.892783 env[1735]: time="2025-05-17T00:40:19.892734523Z" level=info msg="RemoveContainer for \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\" returns successfully" May 17 00:40:19.893264 kubelet[2082]: I0517 00:40:19.893244 2082 scope.go:117] "RemoveContainer" containerID="e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37" May 17 00:40:19.894948 env[1735]: time="2025-05-17T00:40:19.894909981Z" level=info msg="RemoveContainer for \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\"" May 17 00:40:19.901761 env[1735]: time="2025-05-17T00:40:19.901625866Z" level=info msg="RemoveContainer for \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\" returns successfully" May 17 00:40:19.901927 kubelet[2082]: I0517 00:40:19.901859 2082 scope.go:117] "RemoveContainer" containerID="d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46" May 17 00:40:19.903218 env[1735]: time="2025-05-17T00:40:19.903167646Z" level=info msg="RemoveContainer for 
\"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\"" May 17 00:40:19.908279 env[1735]: time="2025-05-17T00:40:19.908217445Z" level=info msg="RemoveContainer for \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\" returns successfully" May 17 00:40:19.908548 kubelet[2082]: I0517 00:40:19.908527 2082 scope.go:117] "RemoveContainer" containerID="9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac" May 17 00:40:19.910512 env[1735]: time="2025-05-17T00:40:19.910459200Z" level=info msg="RemoveContainer for \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\"" May 17 00:40:19.915420 env[1735]: time="2025-05-17T00:40:19.915371817Z" level=info msg="RemoveContainer for \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\" returns successfully" May 17 00:40:19.915713 kubelet[2082]: I0517 00:40:19.915687 2082 scope.go:117] "RemoveContainer" containerID="5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a" May 17 00:40:19.916018 env[1735]: time="2025-05-17T00:40:19.915945809Z" level=error msg="ContainerStatus for \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\": not found" May 17 00:40:19.916184 kubelet[2082]: E0517 00:40:19.916139 2082 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\": not found" containerID="5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a" May 17 00:40:19.916286 kubelet[2082]: I0517 00:40:19.916178 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a"} err="failed to get container status 
\"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d26fbe966e7978f028978a80f51309c1c5e3154664e1381575ef80c2b33c49a\": not found" May 17 00:40:19.916286 kubelet[2082]: I0517 00:40:19.916248 2082 scope.go:117] "RemoveContainer" containerID="b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef" May 17 00:40:19.916449 env[1735]: time="2025-05-17T00:40:19.916402285Z" level=error msg="ContainerStatus for \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\": not found" May 17 00:40:19.916601 kubelet[2082]: E0517 00:40:19.916567 2082 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\": not found" containerID="b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef" May 17 00:40:19.916681 kubelet[2082]: I0517 00:40:19.916608 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef"} err="failed to get container status \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\": rpc error: code = NotFound desc = an error occurred when try to find container \"b49ec695f30d1a3775ecdb8a3e19e91d42a9728314277b4abde519013aa92fef\": not found" May 17 00:40:19.916681 kubelet[2082]: I0517 00:40:19.916624 2082 scope.go:117] "RemoveContainer" containerID="e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37" May 17 00:40:19.916880 env[1735]: time="2025-05-17T00:40:19.916806348Z" level=error msg="ContainerStatus for \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\": not found" May 17 00:40:19.917095 kubelet[2082]: E0517 00:40:19.917078 2082 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\": not found" containerID="e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37" May 17 00:40:19.917216 kubelet[2082]: I0517 00:40:19.917169 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37"} err="failed to get container status \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\": rpc error: code = NotFound desc = an error occurred when try to find container \"e726b00b5c3a43580ec8b751bfef5bf88aaad56a9c084628005eff608ec67f37\": not found" May 17 00:40:19.917216 kubelet[2082]: I0517 00:40:19.917200 2082 scope.go:117] "RemoveContainer" containerID="d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46" May 17 00:40:19.917396 env[1735]: time="2025-05-17T00:40:19.917346716Z" level=error msg="ContainerStatus for \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\": not found" May 17 00:40:19.917506 kubelet[2082]: E0517 00:40:19.917485 2082 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\": not found" containerID="d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46" May 17 00:40:19.917580 kubelet[2082]: I0517 00:40:19.917541 2082 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46"} err="failed to get container status \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8dcebc20accdbb4d461d939d6df0f8eca4b61e2fa2e6d9e2366bf893c110a46\": not found" May 17 00:40:19.917580 kubelet[2082]: I0517 00:40:19.917554 2082 scope.go:117] "RemoveContainer" containerID="9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac" May 17 00:40:19.917731 env[1735]: time="2025-05-17T00:40:19.917693568Z" level=error msg="ContainerStatus for \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\": not found" May 17 00:40:19.917855 kubelet[2082]: E0517 00:40:19.917832 2082 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\": not found" containerID="9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac" May 17 00:40:19.917891 kubelet[2082]: I0517 00:40:19.917854 2082 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac"} err="failed to get container status \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c39c88e8d8275e64a7eff54e079deccad6dbb0bac5c1cde8970108ac55419ac\": not found" May 17 00:40:20.593408 kubelet[2082]: E0517 00:40:20.593351 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 17 00:40:21.554110 kubelet[2082]: E0517 00:40:21.554051 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:21.588955 env[1735]: time="2025-05-17T00:40:21.588908786Z" level=info msg="StopPodSandbox for \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\"" May 17 00:40:21.589353 env[1735]: time="2025-05-17T00:40:21.589028050Z" level=info msg="TearDown network for sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" successfully" May 17 00:40:21.589353 env[1735]: time="2025-05-17T00:40:21.589073614Z" level=info msg="StopPodSandbox for \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" returns successfully" May 17 00:40:21.589794 env[1735]: time="2025-05-17T00:40:21.589759935Z" level=info msg="RemovePodSandbox for \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\"" May 17 00:40:21.589920 env[1735]: time="2025-05-17T00:40:21.589801723Z" level=info msg="Forcibly stopping sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\"" May 17 00:40:21.589920 env[1735]: time="2025-05-17T00:40:21.589906383Z" level=info msg="TearDown network for sandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" successfully" May 17 00:40:21.594268 kubelet[2082]: E0517 00:40:21.594234 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:21.595871 env[1735]: time="2025-05-17T00:40:21.595828134Z" level=info msg="RemovePodSandbox \"d1de63ef59b37036b4b1ad906fc119d668a52901e5a9f236bb437a120fe7cfe6\" returns successfully" May 17 00:40:21.699263 kubelet[2082]: E0517 00:40:21.699087 2082 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:40:21.705105 
kubelet[2082]: E0517 00:40:21.705070 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" containerName="mount-bpf-fs" May 17 00:40:21.705105 kubelet[2082]: E0517 00:40:21.705096 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" containerName="cilium-agent" May 17 00:40:21.705105 kubelet[2082]: E0517 00:40:21.705102 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" containerName="mount-cgroup" May 17 00:40:21.705105 kubelet[2082]: E0517 00:40:21.705108 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" containerName="apply-sysctl-overwrites" May 17 00:40:21.705105 kubelet[2082]: E0517 00:40:21.705114 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" containerName="clean-cilium-state" May 17 00:40:21.705377 kubelet[2082]: I0517 00:40:21.705137 2082 memory_manager.go:354] "RemoveStaleState removing state" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" containerName="cilium-agent" May 17 00:40:21.711220 systemd[1]: Created slice kubepods-burstable-pod67f6c2c6_b28b_40d1_862f_0f12fcd5ea9f.slice. May 17 00:40:21.719299 systemd[1]: Created slice kubepods-besteffort-pod8e1b2380_cc9a_4094_9c70_671813a6e3b0.slice. 
May 17 00:40:21.723671 kubelet[2082]: I0517 00:40:21.723600 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-clustermesh-secrets\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.723959 kubelet[2082]: I0517 00:40:21.723934 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-config-path\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724054 kubelet[2082]: I0517 00:40:21.723970 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e1b2380-cc9a-4094-9c70-671813a6e3b0-cilium-config-path\") pod \"cilium-operator-5d85765b45-d92md\" (UID: \"8e1b2380-cc9a-4094-9c70-671813a6e3b0\") " pod="kube-system/cilium-operator-5d85765b45-d92md" May 17 00:40:21.724054 kubelet[2082]: I0517 00:40:21.724002 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dct22\" (UniqueName: \"kubernetes.io/projected/8e1b2380-cc9a-4094-9c70-671813a6e3b0-kube-api-access-dct22\") pod \"cilium-operator-5d85765b45-d92md\" (UID: \"8e1b2380-cc9a-4094-9c70-671813a6e3b0\") " pod="kube-system/cilium-operator-5d85765b45-d92md" May 17 00:40:21.724054 kubelet[2082]: I0517 00:40:21.724026 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-bpf-maps\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 
00:40:21.724054 kubelet[2082]: I0517 00:40:21.724047 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-etc-cni-netd\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724249 kubelet[2082]: I0517 00:40:21.724077 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-cgroup\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724249 kubelet[2082]: I0517 00:40:21.724100 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-ipsec-secrets\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724249 kubelet[2082]: I0517 00:40:21.724126 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hubble-tls\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724249 kubelet[2082]: I0517 00:40:21.724150 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4zfz\" (UniqueName: \"kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-kube-api-access-l4zfz\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724249 kubelet[2082]: I0517 00:40:21.724173 2082 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-xtables-lock\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724249 kubelet[2082]: I0517 00:40:21.724198 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-run\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724485 kubelet[2082]: I0517 00:40:21.724222 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hostproc\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724485 kubelet[2082]: I0517 00:40:21.724246 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cni-path\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724485 kubelet[2082]: I0517 00:40:21.724272 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-lib-modules\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724485 kubelet[2082]: I0517 00:40:21.724296 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-net\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.724485 kubelet[2082]: I0517 00:40:21.724320 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-kernel\") pod \"cilium-n28v9\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " pod="kube-system/cilium-n28v9" May 17 00:40:21.738022 kubelet[2082]: I0517 00:40:21.737969 2082 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336f9ddb-68ef-4385-b063-c34da0b06909" path="/var/lib/kubelet/pods/336f9ddb-68ef-4385-b063-c34da0b06909/volumes" May 17 00:40:22.018153 env[1735]: time="2025-05-17T00:40:22.018095667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n28v9,Uid:67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f,Namespace:kube-system,Attempt:0,}" May 17 00:40:22.023190 env[1735]: time="2025-05-17T00:40:22.023143895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d92md,Uid:8e1b2380-cc9a-4094-9c70-671813a6e3b0,Namespace:kube-system,Attempt:0,}" May 17 00:40:22.043637 env[1735]: time="2025-05-17T00:40:22.043554864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:22.044002 env[1735]: time="2025-05-17T00:40:22.043933040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:22.044251 env[1735]: time="2025-05-17T00:40:22.044193515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:22.044673 env[1735]: time="2025-05-17T00:40:22.044623302Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7 pid=3680 runtime=io.containerd.runc.v2 May 17 00:40:22.051899 env[1735]: time="2025-05-17T00:40:22.051751375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:22.051899 env[1735]: time="2025-05-17T00:40:22.051883812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:22.052112 env[1735]: time="2025-05-17T00:40:22.051918004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:22.053964 env[1735]: time="2025-05-17T00:40:22.053863647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/343a58ab1475de01f776993a0c1db0208a300a8f36cc004b839f08394f0ac6a5 pid=3697 runtime=io.containerd.runc.v2 May 17 00:40:22.062226 systemd[1]: Started cri-containerd-01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7.scope. May 17 00:40:22.081147 systemd[1]: Started cri-containerd-343a58ab1475de01f776993a0c1db0208a300a8f36cc004b839f08394f0ac6a5.scope. 
May 17 00:40:22.119853 env[1735]: time="2025-05-17T00:40:22.119789430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n28v9,Uid:67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\"" May 17 00:40:22.123666 env[1735]: time="2025-05-17T00:40:22.123621919Z" level=info msg="CreateContainer within sandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:40:22.145599 env[1735]: time="2025-05-17T00:40:22.145550581Z" level=info msg="CreateContainer within sandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\"" May 17 00:40:22.146392 env[1735]: time="2025-05-17T00:40:22.146295086Z" level=info msg="StartContainer for \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\"" May 17 00:40:22.159437 env[1735]: time="2025-05-17T00:40:22.159387524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-d92md,Uid:8e1b2380-cc9a-4094-9c70-671813a6e3b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"343a58ab1475de01f776993a0c1db0208a300a8f36cc004b839f08394f0ac6a5\"" May 17 00:40:22.161830 env[1735]: time="2025-05-17T00:40:22.161781534Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:40:22.171165 systemd[1]: Started cri-containerd-b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98.scope. May 17 00:40:22.187904 systemd[1]: cri-containerd-b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98.scope: Deactivated successfully. 
May 17 00:40:22.208786 env[1735]: time="2025-05-17T00:40:22.208740367Z" level=info msg="shim disconnected" id=b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98 May 17 00:40:22.209002 env[1735]: time="2025-05-17T00:40:22.208972768Z" level=warning msg="cleaning up after shim disconnected" id=b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98 namespace=k8s.io May 17 00:40:22.209002 env[1735]: time="2025-05-17T00:40:22.208995990Z" level=info msg="cleaning up dead shim" May 17 00:40:22.217597 env[1735]: time="2025-05-17T00:40:22.217531013Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3777 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:40:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:40:22.217979 env[1735]: time="2025-05-17T00:40:22.217866516Z" level=error msg="copy shim log" error="read /proc/self/fd/64: file already closed" May 17 00:40:22.219948 env[1735]: time="2025-05-17T00:40:22.219895241Z" level=error msg="Failed to pipe stdout of container \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\"" error="reading from a closed fifo" May 17 00:40:22.220342 env[1735]: time="2025-05-17T00:40:22.220094225Z" level=error msg="Failed to pipe stderr of container \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\"" error="reading from a closed fifo" May 17 00:40:22.223419 env[1735]: time="2025-05-17T00:40:22.223354475Z" level=error msg="StartContainer for \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:40:22.223700 kubelet[2082]: E0517 00:40:22.223663 2082 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98" May 17 00:40:22.225571 kubelet[2082]: E0517 00:40:22.225539 2082 kuberuntime_manager.go:1274] "Unhandled Error" err=< May 17 00:40:22.225571 kubelet[2082]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:40:22.225571 kubelet[2082]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:40:22.225571 kubelet[2082]: rm /hostbin/cilium-mount May 17 00:40:22.225786 kubelet[2082]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4zfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-n28v9_kube-system(67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:40:22.225786 kubelet[2082]: > logger="UnhandledError" May 17 00:40:22.226996 kubelet[2082]: E0517 00:40:22.226966 2082 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-n28v9" podUID="67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" May 17 00:40:22.595359 kubelet[2082]: E0517 00:40:22.595306 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:22.885216 env[1735]: time="2025-05-17T00:40:22.885174382Z" level=info msg="StopPodSandbox for \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\"" May 17 00:40:22.885840 env[1735]: time="2025-05-17T00:40:22.885800922Z" level=info msg="Container to stop \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:40:22.889442 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7-shm.mount: Deactivated successfully. May 17 00:40:22.892682 systemd[1]: cri-containerd-01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7.scope: Deactivated successfully. May 17 00:40:22.918706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7-rootfs.mount: Deactivated successfully. 
May 17 00:40:22.927261 env[1735]: time="2025-05-17T00:40:22.927214833Z" level=info msg="shim disconnected" id=01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7 May 17 00:40:22.927261 env[1735]: time="2025-05-17T00:40:22.927260203Z" level=warning msg="cleaning up after shim disconnected" id=01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7 namespace=k8s.io May 17 00:40:22.927261 env[1735]: time="2025-05-17T00:40:22.927269717Z" level=info msg="cleaning up dead shim" May 17 00:40:22.936593 env[1735]: time="2025-05-17T00:40:22.936499808Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3808 runtime=io.containerd.runc.v2\n" May 17 00:40:22.936938 env[1735]: time="2025-05-17T00:40:22.936902893Z" level=info msg="TearDown network for sandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" successfully" May 17 00:40:22.936938 env[1735]: time="2025-05-17T00:40:22.936933313Z" level=info msg="StopPodSandbox for \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" returns successfully" May 17 00:40:23.037771 kubelet[2082]: I0517 00:40:23.037719 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hostproc\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.037771 kubelet[2082]: I0517 00:40:23.037757 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cni-path\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.037771 kubelet[2082]: I0517 00:40:23.037774 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-etc-cni-netd\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.037792 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-cgroup\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.037841 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4zfz\" (UniqueName: \"kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-kube-api-access-l4zfz\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.037856 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-lib-modules\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.037873 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-net\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.037949 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-kernel\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 
00:40:23.037971 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-config-path\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.037989 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-ipsec-secrets\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.038005 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-clustermesh-secrets\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038025 kubelet[2082]: I0517 00:40:23.038021 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-xtables-lock\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038266 kubelet[2082]: I0517 00:40:23.038039 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-run\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038266 kubelet[2082]: I0517 00:40:23.038067 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-bpf-maps\") pod 
\"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.038266 kubelet[2082]: I0517 00:40:23.038085 2082 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hubble-tls\") pod \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\" (UID: \"67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f\") " May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.038607 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.038641 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hostproc" (OuterVolumeSpecName: "hostproc") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.038661 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cni-path" (OuterVolumeSpecName: "cni-path") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.038675 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.038687 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.039011 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.039337 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.040669 kubelet[2082]: I0517 00:40:23.039371 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.042900 systemd[1]: var-lib-kubelet-pods-67f6c2c6\x2db28b\x2d40d1\x2d862f\x2d0f12fcd5ea9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4zfz.mount: Deactivated successfully. May 17 00:40:23.048080 kubelet[2082]: I0517 00:40:23.048034 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.049567 kubelet[2082]: I0517 00:40:23.048249 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:40:23.049708 kubelet[2082]: I0517 00:40:23.048279 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 17 00:40:23.049783 kubelet[2082]: I0517 00:40:23.049307 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-kube-api-access-l4zfz" (OuterVolumeSpecName: "kube-api-access-l4zfz") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "kube-api-access-l4zfz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:40:23.049875 kubelet[2082]: I0517 00:40:23.049012 2082 setters.go:600] "Node became not ready" node="172.31.16.188" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:40:23Z","lastTransitionTime":"2025-05-17T00:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:40:23.053109 systemd[1]: var-lib-kubelet-pods-67f6c2c6\x2db28b\x2d40d1\x2d862f\x2d0f12fcd5ea9f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:40:23.056875 kubelet[2082]: I0517 00:40:23.056839 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:40:23.056969 kubelet[2082]: I0517 00:40:23.056909 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:40:23.059665 kubelet[2082]: I0517 00:40:23.059623 2082 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" (UID: "67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139281 2082 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-kernel\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139324 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-config-path\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139335 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-ipsec-secrets\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139343 2082 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-host-proc-sys-net\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139351 2082 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-clustermesh-secrets\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139359 2082 
reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-xtables-lock\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139368 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-run\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139375 2082 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-bpf-maps\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139382 2082 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hubble-tls\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139389 2082 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-etc-cni-netd\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139397 2082 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cilium-cgroup\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139404 2082 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4zfz\" (UniqueName: \"kubernetes.io/projected/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-kube-api-access-l4zfz\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139411 2082 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-hostproc\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139418 2082 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-cni-path\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.140382 kubelet[2082]: I0517 00:40:23.139425 2082 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f-lib-modules\") on node \"172.31.16.188\" DevicePath \"\"" May 17 00:40:23.596064 kubelet[2082]: E0517 00:40:23.595944 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:23.741253 systemd[1]: Removed slice kubepods-burstable-pod67f6c2c6_b28b_40d1_862f_0f12fcd5ea9f.slice. May 17 00:40:23.832598 systemd[1]: var-lib-kubelet-pods-67f6c2c6\x2db28b\x2d40d1\x2d862f\x2d0f12fcd5ea9f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:40:23.832696 systemd[1]: var-lib-kubelet-pods-67f6c2c6\x2db28b\x2d40d1\x2d862f\x2d0f12fcd5ea9f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 17 00:40:23.888649 kubelet[2082]: I0517 00:40:23.888620 2082 scope.go:117] "RemoveContainer" containerID="b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98" May 17 00:40:23.892864 env[1735]: time="2025-05-17T00:40:23.892168721Z" level=info msg="RemoveContainer for \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\"" May 17 00:40:23.898348 env[1735]: time="2025-05-17T00:40:23.898285406Z" level=info msg="RemoveContainer for \"b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98\" returns successfully" May 17 00:40:23.967291 kubelet[2082]: E0517 00:40:23.966260 2082 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" containerName="mount-cgroup" May 17 00:40:23.967291 kubelet[2082]: I0517 00:40:23.966408 2082 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" containerName="mount-cgroup" May 17 00:40:23.973782 systemd[1]: Created slice kubepods-burstable-pod6c4d85b6_4eae_4b39_82c2_fa645e91d000.slice. 
May 17 00:40:24.045642 kubelet[2082]: I0517 00:40:24.044990 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-cilium-cgroup\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.045642 kubelet[2082]: I0517 00:40:24.045027 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-lib-modules\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.045642 kubelet[2082]: I0517 00:40:24.045047 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-xtables-lock\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.045642 kubelet[2082]: I0517 00:40:24.045063 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c4d85b6-4eae-4b39-82c2-fa645e91d000-clustermesh-secrets\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.045642 kubelet[2082]: I0517 00:40:24.045078 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c4d85b6-4eae-4b39-82c2-fa645e91d000-cilium-ipsec-secrets\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.045642 kubelet[2082]: I0517 00:40:24.045095 2082 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9js7s\" (UniqueName: \"kubernetes.io/projected/6c4d85b6-4eae-4b39-82c2-fa645e91d000-kube-api-access-9js7s\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046087 kubelet[2082]: I0517 00:40:24.046036 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-bpf-maps\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046087 kubelet[2082]: I0517 00:40:24.046077 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-hostproc\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046097 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-cilium-run\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046120 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-host-proc-sys-net\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046135 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6c4d85b6-4eae-4b39-82c2-fa645e91d000-hubble-tls\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046149 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-etc-cni-netd\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046162 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c4d85b6-4eae-4b39-82c2-fa645e91d000-cilium-config-path\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046179 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-cni-path\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.046263 kubelet[2082]: I0517 00:40:24.046194 2082 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c4d85b6-4eae-4b39-82c2-fa645e91d000-host-proc-sys-kernel\") pod \"cilium-jd572\" (UID: \"6c4d85b6-4eae-4b39-82c2-fa645e91d000\") " pod="kube-system/cilium-jd572" May 17 00:40:24.064732 env[1735]: time="2025-05-17T00:40:24.064680423Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 
17 00:40:24.068891 env[1735]: time="2025-05-17T00:40:24.068844802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:24.071747 env[1735]: time="2025-05-17T00:40:24.071707265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:24.072229 env[1735]: time="2025-05-17T00:40:24.072200950Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:40:24.075264 env[1735]: time="2025-05-17T00:40:24.075227407Z" level=info msg="CreateContainer within sandbox \"343a58ab1475de01f776993a0c1db0208a300a8f36cc004b839f08394f0ac6a5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:40:24.100635 env[1735]: time="2025-05-17T00:40:24.100564790Z" level=info msg="CreateContainer within sandbox \"343a58ab1475de01f776993a0c1db0208a300a8f36cc004b839f08394f0ac6a5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f7e7d3cb7210e77126202ac6c12d319b81b6517a52664f6448652225eb66629a\"" May 17 00:40:24.101241 env[1735]: time="2025-05-17T00:40:24.101187102Z" level=info msg="StartContainer for \"f7e7d3cb7210e77126202ac6c12d319b81b6517a52664f6448652225eb66629a\"" May 17 00:40:24.131395 systemd[1]: Started cri-containerd-f7e7d3cb7210e77126202ac6c12d319b81b6517a52664f6448652225eb66629a.scope. 
May 17 00:40:24.177462 env[1735]: time="2025-05-17T00:40:24.177345225Z" level=info msg="StartContainer for \"f7e7d3cb7210e77126202ac6c12d319b81b6517a52664f6448652225eb66629a\" returns successfully" May 17 00:40:24.282670 env[1735]: time="2025-05-17T00:40:24.282632366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jd572,Uid:6c4d85b6-4eae-4b39-82c2-fa645e91d000,Namespace:kube-system,Attempt:0,}" May 17 00:40:24.302515 env[1735]: time="2025-05-17T00:40:24.302431577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:24.302515 env[1735]: time="2025-05-17T00:40:24.302473412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:24.302515 env[1735]: time="2025-05-17T00:40:24.302488855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:24.303051 env[1735]: time="2025-05-17T00:40:24.302940648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216 pid=3874 runtime=io.containerd.runc.v2 May 17 00:40:24.324225 systemd[1]: Started cri-containerd-a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216.scope. 
May 17 00:40:24.369281 env[1735]: time="2025-05-17T00:40:24.369228564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jd572,Uid:6c4d85b6-4eae-4b39-82c2-fa645e91d000,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\"" May 17 00:40:24.372490 env[1735]: time="2025-05-17T00:40:24.372449167Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:40:24.392077 env[1735]: time="2025-05-17T00:40:24.392032237Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4\"" May 17 00:40:24.393082 env[1735]: time="2025-05-17T00:40:24.393051125Z" level=info msg="StartContainer for \"e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4\"" May 17 00:40:24.408832 systemd[1]: Started cri-containerd-e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4.scope. May 17 00:40:24.440233 env[1735]: time="2025-05-17T00:40:24.440135161Z" level=info msg="StartContainer for \"e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4\" returns successfully" May 17 00:40:24.466105 systemd[1]: cri-containerd-e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4.scope: Deactivated successfully. 
May 17 00:40:24.510739 env[1735]: time="2025-05-17T00:40:24.510692641Z" level=info msg="shim disconnected" id=e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4 May 17 00:40:24.510739 env[1735]: time="2025-05-17T00:40:24.510735352Z" level=warning msg="cleaning up after shim disconnected" id=e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4 namespace=k8s.io May 17 00:40:24.510739 env[1735]: time="2025-05-17T00:40:24.510744655Z" level=info msg="cleaning up dead shim" May 17 00:40:24.518931 env[1735]: time="2025-05-17T00:40:24.518884404Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3958 runtime=io.containerd.runc.v2\n" May 17 00:40:24.596961 kubelet[2082]: E0517 00:40:24.596906 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:24.836798 systemd[1]: run-containerd-runc-k8s.io-f7e7d3cb7210e77126202ac6c12d319b81b6517a52664f6448652225eb66629a-runc.vwNBSj.mount: Deactivated successfully. May 17 00:40:24.894270 env[1735]: time="2025-05-17T00:40:24.894223910Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:40:24.913511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964087062.mount: Deactivated successfully. May 17 00:40:24.919220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4148609864.mount: Deactivated successfully. 
May 17 00:40:24.928227 env[1735]: time="2025-05-17T00:40:24.928175455Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d\"" May 17 00:40:24.928899 env[1735]: time="2025-05-17T00:40:24.928863365Z" level=info msg="StartContainer for \"b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d\"" May 17 00:40:24.948200 systemd[1]: Started cri-containerd-b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d.scope. May 17 00:40:24.984428 env[1735]: time="2025-05-17T00:40:24.984357204Z" level=info msg="StartContainer for \"b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d\" returns successfully" May 17 00:40:25.001772 systemd[1]: cri-containerd-b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d.scope: Deactivated successfully. May 17 00:40:25.038639 env[1735]: time="2025-05-17T00:40:25.038581957Z" level=info msg="shim disconnected" id=b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d May 17 00:40:25.038639 env[1735]: time="2025-05-17T00:40:25.038624797Z" level=warning msg="cleaning up after shim disconnected" id=b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d namespace=k8s.io May 17 00:40:25.038639 env[1735]: time="2025-05-17T00:40:25.038641292Z" level=info msg="cleaning up dead shim" May 17 00:40:25.047097 env[1735]: time="2025-05-17T00:40:25.047052956Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4016 runtime=io.containerd.runc.v2\n" May 17 00:40:25.315574 kubelet[2082]: W0517 00:40:25.315519 2082 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f6c2c6_b28b_40d1_862f_0f12fcd5ea9f.slice/cri-containerd-b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98.scope WatchSource:0}: container "b70e50c3df7d37ec7fc0bfe728c55b82619311924326c4b53e5d39b5abee3a98" in namespace "k8s.io": not found May 17 00:40:25.597524 kubelet[2082]: E0517 00:40:25.597400 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:25.738492 kubelet[2082]: I0517 00:40:25.738432 2082 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f" path="/var/lib/kubelet/pods/67f6c2c6-b28b-40d1-862f-0f12fcd5ea9f/volumes" May 17 00:40:25.902780 env[1735]: time="2025-05-17T00:40:25.902738806Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:40:25.922415 kubelet[2082]: I0517 00:40:25.920991 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-d92md" podStartSLOduration=3.008849843 podStartE2EDuration="4.920974824s" podCreationTimestamp="2025-05-17 00:40:21 +0000 UTC" firstStartedPulling="2025-05-17 00:40:22.161234226 +0000 UTC m=+60.974542815" lastFinishedPulling="2025-05-17 00:40:24.073359207 +0000 UTC m=+62.886667796" observedRunningTime="2025-05-17 00:40:24.940533654 +0000 UTC m=+63.753842251" watchObservedRunningTime="2025-05-17 00:40:25.920974824 +0000 UTC m=+64.734283448" May 17 00:40:25.925096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631798092.mount: Deactivated successfully. May 17 00:40:25.936791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2929379298.mount: Deactivated successfully. 
May 17 00:40:25.948360 env[1735]: time="2025-05-17T00:40:25.948290542Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214\"" May 17 00:40:25.949046 env[1735]: time="2025-05-17T00:40:25.949008075Z" level=info msg="StartContainer for \"780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214\"" May 17 00:40:25.970682 systemd[1]: Started cri-containerd-780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214.scope. May 17 00:40:26.011797 env[1735]: time="2025-05-17T00:40:26.011725899Z" level=info msg="StartContainer for \"780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214\" returns successfully" May 17 00:40:26.028490 systemd[1]: cri-containerd-780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214.scope: Deactivated successfully. May 17 00:40:26.067591 env[1735]: time="2025-05-17T00:40:26.067534965Z" level=info msg="shim disconnected" id=780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214 May 17 00:40:26.067591 env[1735]: time="2025-05-17T00:40:26.067591134Z" level=warning msg="cleaning up after shim disconnected" id=780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214 namespace=k8s.io May 17 00:40:26.067941 env[1735]: time="2025-05-17T00:40:26.067601320Z" level=info msg="cleaning up dead shim" May 17 00:40:26.075559 env[1735]: time="2025-05-17T00:40:26.075516037Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4074 runtime=io.containerd.runc.v2\n" May 17 00:40:26.597861 kubelet[2082]: E0517 00:40:26.597773 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:26.700334 kubelet[2082]: E0517 00:40:26.700289 2082 kubelet.go:2902] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:40:26.905685 env[1735]: time="2025-05-17T00:40:26.905640494Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:40:26.924740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1174757606.mount: Deactivated successfully. May 17 00:40:26.930511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994815707.mount: Deactivated successfully. May 17 00:40:26.939340 env[1735]: time="2025-05-17T00:40:26.939278278Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000\"" May 17 00:40:26.940044 env[1735]: time="2025-05-17T00:40:26.940008360Z" level=info msg="StartContainer for \"01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000\"" May 17 00:40:26.961609 systemd[1]: Started cri-containerd-01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000.scope. May 17 00:40:26.994003 systemd[1]: cri-containerd-01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000.scope: Deactivated successfully. 
May 17 00:40:26.996387 env[1735]: time="2025-05-17T00:40:26.996346036Z" level=info msg="StartContainer for \"01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000\" returns successfully" May 17 00:40:27.026979 env[1735]: time="2025-05-17T00:40:27.026926036Z" level=info msg="shim disconnected" id=01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000 May 17 00:40:27.026979 env[1735]: time="2025-05-17T00:40:27.026974193Z" level=warning msg="cleaning up after shim disconnected" id=01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000 namespace=k8s.io May 17 00:40:27.026979 env[1735]: time="2025-05-17T00:40:27.026983340Z" level=info msg="cleaning up dead shim" May 17 00:40:27.036856 env[1735]: time="2025-05-17T00:40:27.036799510Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4133 runtime=io.containerd.runc.v2\n" May 17 00:40:27.598637 kubelet[2082]: E0517 00:40:27.598594 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:27.910362 env[1735]: time="2025-05-17T00:40:27.910219216Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:40:27.931390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286489713.mount: Deactivated successfully. May 17 00:40:27.937735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558764535.mount: Deactivated successfully. 
May 17 00:40:27.946566 env[1735]: time="2025-05-17T00:40:27.946407614Z" level=info msg="CreateContainer within sandbox \"a1b639ef254fe3ddd29267278b00b08fde62fd345293d881a95d1639fee94216\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590\"" May 17 00:40:27.947139 env[1735]: time="2025-05-17T00:40:27.947063483Z" level=info msg="StartContainer for \"025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590\"" May 17 00:40:27.963488 systemd[1]: Started cri-containerd-025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590.scope. May 17 00:40:28.000202 env[1735]: time="2025-05-17T00:40:28.000157423Z" level=info msg="StartContainer for \"025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590\" returns successfully" May 17 00:40:28.427735 kubelet[2082]: W0517 00:40:28.427530 2082 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4d85b6_4eae_4b39_82c2_fa645e91d000.slice/cri-containerd-e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4.scope WatchSource:0}: task e5bc71eb281c834f1486744ccd5ee1e70606fc91d0c8badc76089208ad8b2eb4 not found: not found May 17 00:40:28.584854 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 17 00:40:28.599367 kubelet[2082]: E0517 00:40:28.599323 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:29.599976 kubelet[2082]: E0517 00:40:29.599918 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:30.600397 kubelet[2082]: E0517 00:40:30.600344 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:30.801656 systemd[1]: 
run-containerd-runc-k8s.io-025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590-runc.UvnK7n.mount: Deactivated successfully. May 17 00:40:31.400556 systemd-networkd[1458]: lxc_health: Link UP May 17 00:40:31.406807 (udev-worker)[4691]: Network interface NamePolicy= disabled on kernel command line. May 17 00:40:31.409173 systemd-networkd[1458]: lxc_health: Gained carrier May 17 00:40:31.409844 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:40:31.536575 kubelet[2082]: W0517 00:40:31.536535 2082 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4d85b6_4eae_4b39_82c2_fa645e91d000.slice/cri-containerd-b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d.scope WatchSource:0}: task b2a10de2567f62171b8dde1ff3badf5847b37aa3d070994ece621c214a88ee9d not found: not found May 17 00:40:31.601502 kubelet[2082]: E0517 00:40:31.601430 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:32.311207 kubelet[2082]: I0517 00:40:32.311142 2082 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jd572" podStartSLOduration=9.311118722 podStartE2EDuration="9.311118722s" podCreationTimestamp="2025-05-17 00:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:40:28.941680816 +0000 UTC m=+67.754989412" watchObservedRunningTime="2025-05-17 00:40:32.311118722 +0000 UTC m=+71.124427319" May 17 00:40:32.602037 kubelet[2082]: E0517 00:40:32.601998 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:32.906972 systemd-networkd[1458]: lxc_health: Gained IPv6LL May 17 00:40:33.216230 systemd[1]: 
run-containerd-runc-k8s.io-025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590-runc.IGg9Bt.mount: Deactivated successfully. May 17 00:40:33.602792 kubelet[2082]: E0517 00:40:33.602753 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:34.604149 kubelet[2082]: E0517 00:40:34.604106 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:34.646869 kubelet[2082]: W0517 00:40:34.646803 2082 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4d85b6_4eae_4b39_82c2_fa645e91d000.slice/cri-containerd-780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214.scope WatchSource:0}: task 780455f0154775792fddc70755da014ef66fa731d21762b13567cfe68136b214 not found: not found May 17 00:40:35.445040 systemd[1]: run-containerd-runc-k8s.io-025847d14c3bd85ed36d4cad92acc65bf1fde40e0a9b01f9cb69cb67ef9cb590-runc.D6g90J.mount: Deactivated successfully. 
May 17 00:40:35.605059 kubelet[2082]: E0517 00:40:35.605016 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:36.605893 kubelet[2082]: E0517 00:40:36.605835 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:37.606753 kubelet[2082]: E0517 00:40:37.606681 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:37.772605 kubelet[2082]: W0517 00:40:37.772532 2082 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c4d85b6_4eae_4b39_82c2_fa645e91d000.slice/cri-containerd-01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000.scope WatchSource:0}: task 01e47924f6270e27762983f70df1f15ecdfd85d63699cda26fb7e76071603000 not found: not found May 17 00:40:38.607064 kubelet[2082]: E0517 00:40:38.606984 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:39.607714 kubelet[2082]: E0517 00:40:39.607658 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:40.608048 kubelet[2082]: E0517 00:40:40.608002 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:41.554591 kubelet[2082]: E0517 00:40:41.554532 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:41.608618 kubelet[2082]: E0517 00:40:41.608573 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:42.609645 kubelet[2082]: E0517 00:40:42.609524 2082 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:43.610076 kubelet[2082]: E0517 00:40:43.610031 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:44.610945 kubelet[2082]: E0517 00:40:44.610889 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:45.611054 kubelet[2082]: E0517 00:40:45.610997 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:46.611230 kubelet[2082]: E0517 00:40:46.611159 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:47.611955 kubelet[2082]: E0517 00:40:47.611895 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:48.612583 kubelet[2082]: E0517 00:40:48.612528 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:49.613300 kubelet[2082]: E0517 00:40:49.613246 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:50.613900 kubelet[2082]: E0517 00:40:50.613765 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:51.614977 kubelet[2082]: E0517 00:40:51.614924 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:52.616062 kubelet[2082]: E0517 00:40:52.616017 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:53.180408 kubelet[2082]: E0517 00:40:53.180344 2082 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:40:53.616908 kubelet[2082]: E0517 00:40:53.616855 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:54.617665 kubelet[2082]: E0517 00:40:54.617601 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:55.618524 kubelet[2082]: E0517 00:40:55.618445 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:56.619428 kubelet[2082]: E0517 00:40:56.619383 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:57.620298 kubelet[2082]: E0517 00:40:57.620215 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:58.620779 kubelet[2082]: E0517 00:40:58.620733 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:59.621760 kubelet[2082]: E0517 00:40:59.621687 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:00.622460 kubelet[2082]: E0517 00:41:00.622376 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:01.554205 kubelet[2082]: E0517 00:41:01.554154 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:01.622960 kubelet[2082]: E0517 00:41:01.622917 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 17 00:41:02.623536 kubelet[2082]: E0517 00:41:02.623479 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:03.180604 kubelet[2082]: E0517 00:41:03.180561 2082 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": context deadline exceeded" May 17 00:41:03.441635 kubelet[2082]: E0517 00:41:03.441397 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:40:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:40:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:40:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:40:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73306098},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\\\",\\\"registry.k8s.io/kube-proxy:v1.31.9\\\"],\
\\"sizeBytes\\\":30354642},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.16.188\": Patch \"https://172.31.24.80:6443/api/v1/nodes/172.31.16.188/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:41:03.624105 kubelet[2082]: E0517 00:41:03.624041 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:04.624865 kubelet[2082]: E0517 00:41:04.624786 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:05.625019 kubelet[2082]: E0517 00:41:05.624952 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:06.625178 kubelet[2082]: E0517 00:41:06.625109 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:07.625729 kubelet[2082]: E0517 00:41:07.625683 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:08.626606 kubelet[2082]: E0517 00:41:08.626540 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:09.627538 kubelet[2082]: E0517 00:41:09.627447 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:10.628001 kubelet[2082]: E0517 00:41:10.627953 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 17 00:41:11.628328 kubelet[2082]: E0517 00:41:11.628274 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:12.629266 kubelet[2082]: E0517 00:41:12.629185 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:13.181678 kubelet[2082]: E0517 00:41:13.181571 2082 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:41:13.442473 kubelet[2082]: E0517 00:41:13.442349 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.188\": Get \"https://172.31.24.80:6443/api/v1/nodes/172.31.16.188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 17 00:41:13.630150 kubelet[2082]: E0517 00:41:13.630104 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:14.631272 kubelet[2082]: E0517 00:41:14.631224 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:15.632297 kubelet[2082]: E0517 00:41:15.632250 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:16.633150 kubelet[2082]: E0517 00:41:16.633107 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:17.634012 kubelet[2082]: E0517 00:41:17.633951 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:18.635144 kubelet[2082]: E0517 
00:41:18.635098 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:19.636022 kubelet[2082]: E0517 00:41:19.635978 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:20.545052 kubelet[2082]: E0517 00:41:20.545013 2082 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": unexpected EOF" May 17 00:41:20.554810 kubelet[2082]: E0517 00:41:20.554779 2082 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": read tcp 172.31.16.188:58960->172.31.24.80:6443: read: connection reset by peer" May 17 00:41:20.555033 kubelet[2082]: I0517 00:41:20.554997 2082 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" May 17 00:41:20.555733 kubelet[2082]: E0517 00:41:20.555630 2082 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="200ms" May 17 00:41:20.636584 kubelet[2082]: E0517 00:41:20.636518 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:20.756967 kubelet[2082]: E0517 00:41:20.756918 2082 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="400ms" May 17 00:41:21.158909 kubelet[2082]: E0517 00:41:21.158790 2082 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" interval="800ms" May 17 00:41:21.546723 kubelet[2082]: E0517 00:41:21.546494 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.188\": Get \"https://172.31.24.80:6443/api/v1/nodes/172.31.16.188?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused - error from a previous attempt: unexpected EOF" May 17 00:41:21.547100 kubelet[2082]: E0517 00:41:21.547065 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.188\": Get \"https://172.31.24.80:6443/api/v1/nodes/172.31.16.188?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" May 17 00:41:21.547737 kubelet[2082]: E0517 00:41:21.547689 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.188\": Get \"https://172.31.24.80:6443/api/v1/nodes/172.31.16.188?timeout=10s\": dial tcp 172.31.24.80:6443: connect: connection refused" May 17 00:41:21.547737 kubelet[2082]: E0517 00:41:21.547721 2082 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" May 17 00:41:21.553917 kubelet[2082]: E0517 00:41:21.553854 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:21.599995 env[1735]: time="2025-05-17T00:41:21.599894235Z" level=info msg="StopPodSandbox for \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\"" May 17 00:41:21.600707 env[1735]: time="2025-05-17T00:41:21.600069560Z" level=info msg="TearDown network for sandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" successfully" May 17 00:41:21.600707 env[1735]: 
time="2025-05-17T00:41:21.600119542Z" level=info msg="StopPodSandbox for \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" returns successfully" May 17 00:41:21.601133 env[1735]: time="2025-05-17T00:41:21.601031496Z" level=info msg="RemovePodSandbox for \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\"" May 17 00:41:21.601236 env[1735]: time="2025-05-17T00:41:21.601136172Z" level=info msg="Forcibly stopping sandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\"" May 17 00:41:21.601296 env[1735]: time="2025-05-17T00:41:21.601240426Z" level=info msg="TearDown network for sandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" successfully" May 17 00:41:21.613016 env[1735]: time="2025-05-17T00:41:21.612962675Z" level=info msg="RemovePodSandbox \"01746e28e8a3107f0610594c4e1051630eeb96669703b4c3cb75063c63b915e7\" returns successfully" May 17 00:41:21.637006 kubelet[2082]: E0517 00:41:21.636951 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:22.637164 kubelet[2082]: E0517 00:41:22.637108 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:23.638032 kubelet[2082]: E0517 00:41:23.637979 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:24.638959 kubelet[2082]: E0517 00:41:24.638917 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:25.639525 kubelet[2082]: E0517 00:41:25.639450 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:26.640128 kubelet[2082]: E0517 00:41:26.640064 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 17 00:41:27.641003 kubelet[2082]: E0517 00:41:27.640933 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:28.642097 kubelet[2082]: E0517 00:41:28.642018 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:29.642481 kubelet[2082]: E0517 00:41:29.642428 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:30.643046 kubelet[2082]: E0517 00:41:30.642989 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:31.644176 kubelet[2082]: E0517 00:41:31.644108 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:31.960295 kubelet[2082]: E0517 00:41:31.960167 2082 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" May 17 00:41:32.645195 kubelet[2082]: E0517 00:41:32.645144 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:33.645988 kubelet[2082]: E0517 00:41:33.645943 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:34.646792 kubelet[2082]: E0517 00:41:34.646733 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:35.647873 kubelet[2082]: E0517 00:41:35.647799 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 17 00:41:36.649007 kubelet[2082]: E0517 00:41:36.648946 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:37.649955 kubelet[2082]: E0517 00:41:37.649898 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:38.650955 kubelet[2082]: E0517 00:41:38.650918 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:39.652053 kubelet[2082]: E0517 00:41:39.651979 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:40.652999 kubelet[2082]: E0517 00:41:40.652955 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:41.553781 kubelet[2082]: E0517 00:41:41.553738 2082 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:41.653628 kubelet[2082]: E0517 00:41:41.653575 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:41.734036 kubelet[2082]: E0517 00:41:41.733969 2082 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:41:31Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:41:31Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:41:31Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-05-17T00:41:31Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73306098},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\\\",\\\"registry.k8s.io/kube-proxy:v1.31.9\\\"],\\\"sizeBytes\\\":30354642},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.16.188\": Patch \"https://172.31.24.80:6443/api/v1/nodes/172.31.16.188/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" May 17 00:41:42.653770 kubelet[2082]: E0517 
00:41:42.653710 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:43.562137 kubelet[2082]: E0517 00:41:43.562094 2082 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.188?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" May 17 00:41:43.654639 kubelet[2082]: E0517 00:41:43.654584 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:44.655756 kubelet[2082]: E0517 00:41:44.655698 2082 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"